Posts by tom_tav

    I know that it's generally a bad idea to use symlinks in shares.


    Is there a smarter solution to achieve this?


    On my media servers I have some directories I'd like to share for my media devices. These directories are scattered across the main structure:


    e.g.


    AUDIO/blahblah/iTunes/Files

    AUDIO/gluglu/recordings/X

    ....


    I can create a share for each of these dirs, but then I need to connect to X shares on each media device.


    What I'd like to do is put them together in one share, e.g. MEDIA.


    I could create a MEDIA dir, share it, and symlink to the needed directories. But AFAIK I can only enable following symlinks outside the share globally, which I would like to avoid.


    I tried to use the sharedFolderFS, but it didn't work as expected (ZFS volume).


    Any ideas?
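    One way to avoid symlinks entirely is bind mounts: the kernel mounts each scattered directory a second time under a single MEDIA tree, and Samba just sees ordinary directories inside the share. A sketch for /etc/fstab, using the hypothetical paths from the examples above:

    ```
    # /etc/fstab -- bind the scattered directories into one shared tree
    # (paths are the examples from above; adjust to the real layout)
    /AUDIO/blahblah/iTunes/Files  /MEDIA/iTunes      none  bind  0  0
    /AUDIO/gluglu/recordings/X    /MEDIA/recordings  none  bind  0  0
    ```

    The target directories under /MEDIA have to exist before mounting (mkdir -p), and a one-off `mount --bind SRC DST` does the same thing without editing fstab.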

    Based on the Samba 4.9.x features and the guide https://wiki.samba.org/index.p…Work_Better_with_Mac_OS_X I discovered a few small things:


    1. The comment above "fruit:aapl = yes" in the smb.conf [global] section is not quite correct:


    # Special configuration for Apple's Time Machine


    Technically correct, it should be:


    # enable Apple's SMB2+ extension codenamed AAPL


    2. For the share sections you didn't add the "#Extra options" comment line when generating the smb.conf.


    3. It would be good to add the switch for EA support in the global section as well (right now it only exists under shares; it can be done via extra options, so it works for now).


    And last but not least, the really important thing/bug:


    4. Inside the share sections there is an empty "vfs objects =" line. That's not very useful, because it stays in the configuration even if you add the needed "vfs objects = fruit streams_xattr" via the extra options, and you have to do this for every share because it overrides the global definition (if you made one).


    P.S. Yes, I did see what gets added when you enable Time Machine support (it makes more sense to create a separate TM share with a quota). So at least when Time Machine is not enabled, you should just omit the empty "vfs objects" line from the section so the user can configure what he wants via extra options.


    Sometimes you just want macOS-compatible shares without Time Machine. And if you create a TM share, you'd want to add the quota most of the time.


    Code
    [TimeMachineBackup]
    vfs objects = fruit streams_xattr
    fruit:time machine = yes
    # fruit:time machine max size = SIZE
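    For plain macOS-compatible shares without Time Machine, the per-share boilerplate could live in [global] instead. A sketch along the lines of the wiki guide (option names are from Samba's vfs_fruit; the values are my suggestion, not the generated config):

    ```
    [global]
    # enable Apple's SMB2+ extension codenamed AAPL
    fruit:aapl = yes
    ea support = yes
    # set the VFS stack once; shares then inherit it instead of
    # overriding it with an empty "vfs objects ="
    vfs objects = catia fruit streams_xattr
    ```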

    Seems the speeds are OK for this machine. My 3+ day restore with an avg 40 MB/s transfer rate could be down to overloading the internal ports!?



    SanDisk 64 GB SSD (SDSSDP064G) (on the 5th internal SATA port, 3 Gb/s link enabled w. modded BIOS)


    Write zeros; this SSD is slow:

    root@media:/INTERNAL# dd if=/dev/zero of=/testfile bs=1M count=1024 conv=fdatasync,notrunc
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 12.0112 s, 89.4 MB/s


    Read testfile with zeros, cache at work:
    root@media:/INTERNAL# dd if=/testfile of=/dev/null bs=1M count=1024 conv=fdatasync,notrunc
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.29201 s, 831 MB/s


    Read movie file, acceptable read performance:
    root@media:/INTERNAL# dd if=/zzzz.mp4 of=/dev/null bs=1M count=1024 conv=fdatasync,notrunc
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.44178 s, 242 MB/s



    ZFS RAID (RAID-Z1, no compression, 4x 3 TB ST33000651AS):


    Write zeros:
    root@media:/INTERNAL# dd if=/dev/zero of=/INTERNAL/testfile bs=1M count=1024 conv=fdatasync,notrunc
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 8.8095 s, 122 MB/s


    Read testfile with zeros, cache at work:
    root@media:/INTERNAL# dd if=/INTERNAL/testfile of=/dev/null bs=1M count=1024 conv=fdatasync,notrunc
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.89859 s, 566 MB/s


    Read movie file:
    root@media:/INTERNAL# dd if=/INTERNAL/VIDEO/MOVIES/zzzz.mp4 of=/dev/null bs=1M count=1024 conv=fdatasync,notrunc
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 6.05504 s, 177 MB/s
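    A caveat on the two "cache at work" reads above: conv=fdatasync only affects writes, so those read numbers mostly measure RAM, not the disk. A small sketch of a more honest re-test (file name and size are placeholders):

    ```shell
    # Write a 16 MiB test file; conv=fdatasync makes dd flush to disk
    # before reporting, so the write speed is honest.
    dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=16 conv=fdatasync 2>/dev/null

    # Before timing a read, evict the page cache (needs root), otherwise
    # the result is the cache, not the disk:
    # sync && echo 3 > /proc/sys/vm/drop_caches
    dd if=/tmp/ddtest.bin of=/dev/null bs=1M 2>/dev/null
    ```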



    Here are the cross-copy operations; they are slower than the /dev/zero operations above:


    RAID -> SSD, movie file:


    root@media:/INTERNAL# dd if=/INTERNAL/VIDEO/MOVIES/zzzz.mp4 of=/zzzz2.mp4 bs=1M count=1024 conv=fdatasync,notrunc
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 16.4298 s, 65.4 MB/s



    SSD -> RAID, movie file:


    root@media:/INTERNAL# dd if=/zzzz.mp4 of=/INTERNAL/zzzz2.mp4 bs=1M count=1024 conv=fdatasync,notrunc
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 10.3434 s, 104 MB/s



    P.S. I can saturate the GbE connection!

    I only have the I/O stats from the source 8 TB disk, because disk performance stats are not available for the ZFS RAID:

    The left part (until Fri 20:00) was the operation with rsync (mostly audio files for tracks and albums); from then on it was cp (bigger video files).


    First I thought the HDD firmware was throttling because it ran hot (61 °C), but after cooling it down to 33 °C there was no difference.

    geaves How is your ZFS performance on the little HP? Filling the ZFS array I get at most ~40 MB/s avg (with cp; with rsync even less, around 30 MB/s avg) with short spikes above that (I'm restoring the data from a locally eSATA-connected 8 TB drive).


    The backup from the old mdraid was much faster (at least twice as fast). I wonder if the SATA hardware is not strong enough on this machine.


    I have no compression enabled on this pool because 98% of the files are media files.

    The FreeBSD pool:


    Pool version 5000 with feature flags:

    zpool get all ZFS_POOL | grep feature@
    ZFS_POOL  feature@async_destroy          enabled   local
    ZFS_POOL  feature@empty_bpobj            active    local
    ZFS_POOL  feature@lz4_compress           active    local
    ZFS_POOL  feature@multi_vdev_crash_dump  enabled   local
    ZFS_POOL  feature@spacemap_histogram     active    local
    ZFS_POOL  feature@enabled_txg            active    local
    ZFS_POOL  feature@hole_birth             active    local
    ZFS_POOL  feature@extensible_dataset     enabled   local
    ZFS_POOL  feature@embedded_data          active    local
    ZFS_POOL  feature@bookmarks              enabled   local
    ZFS_POOL  feature@filesystem_limits      enabled   local
    ZFS_POOL  feature@large_blocks           enabled   local
    ZFS_POOL  feature@sha512                 enabled   local
    ZFS_POOL  feature@skein                  enabled   local
    ZFS_POOL  feature@device_removal         disabled  local
    ZFS_POOL  feature@obsolete_counts        disabled  local
    ZFS_POOL  feature@zpool_checkpoint       disabled  local

    tom_tav

    See this post (above) for the ZFS command lines I mentioned but didn't originally include.

    Thanks a lot. I think I will skip the compression; the little HPs are no performance monsters and are already getting bogged down by Plex.


    Btw, in theory it should be possible to import an existing ZFS pool from a FreeBSD machine, no? (As long as the ZFS pool version is <= the max version on Debian.)
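    In principle yes: pools at version 5000 are identified by feature flags rather than a version number, so the import should work as long as the Linux ZFS module knows every feature that is active on the pool. The commands would be roughly as follows (pool name taken from the listing above; -f only if the pool wasn't exported cleanly on the FreeBSD side):

    ```
    # scan attached disks for importable pools
    zpool import
    # then import by name (add -f if it wasn't cleanly exported)
    zpool import ZFS_POOL
    ```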

    Something different I'm fighting with at the moment:


    I backed up the RAID to an external 8 TB HDD with rsync:


    time rsync -aHAXxv --numeric-ids --delete --progress --stats /srv/{source uuid} /srv/{destination uuid}


    Now I just wanted to bring the backup up to date and reran it (in dry-run mode). Well, it wants to copy all the files again.


    Number of files: 573,807 (reg: 341,491, dir: 206,665, link: 25,651)
    Number of created files: 573,807 (reg: 341,491, dir: 206,665, link: 25,651)
    Number of deleted files: 0
    Number of regular files transferred: 341,491
    Total file size: 7,836,673,846,395 bytes
    Total transferred file size: 7,836,671,442,073 bytes
    Literal data: 0 bytes
    Matched data: 0 bytes
    File list size: 720,805
    File list generation time: 0.001 seconds
    File list transfer time: 0.000 seconds
    Total bytes sent: 27,205,773
    Total bytes received: 1,934,979

    sent 27,205,773 bytes  received 1,934,979 bytes  84,100.29 bytes/sec
    total size is 7,836,673,846,395  speedup is 268,924.90 (DRY RUN)


    Could it be the changed ACLs? I thought rsync would just touch them and not transfer the files again.
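    Changed ACLs alone would normally show up as attribute updates, not as 341,491 "created" files, so another suspect is rsync's trailing-slash semantics: without the trailing slash, rsync copies the source directory itself into the destination, and a later run then compares against the wrong subtree. A toy sketch (paths are placeholders, not the real UUID paths):

    ```shell
    # Build a tiny source tree
    mkdir -p /tmp/rs_src /tmp/rs_dst
    echo hello > /tmp/rs_src/a.txt

    # "rs_src/" syncs the *contents* of rs_src into rs_dst;
    # "rs_src" (no slash) would create /tmp/rs_dst/rs_src/a.txt instead
    rsync -a /tmp/rs_src/ /tmp/rs_dst/
    ```

    If the first run and the re-run disagree on the trailing slash, the dry run will list every file as new even though the data is already on the backup disk.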

    So you have all three USB drives connected permanently and do an alternating backup from the boot one to the two backup USB drives?

    This would be an OK solution for me; anything where I need to swap something manually from time to time is not possible/useful in my case.

    Right now I have the system already moved to the SSD, but I'm still tempted to try the ZFS caching. On the other hand, for a media server it doesn't make much sense...



    P.S. Is the flashmedia plugin just for USB sticks, or is it also useful for SSDs?

    So, I need some more input from you guys.


    I'm happy with the new OMV and will keep it. I will change the old mdadm RAID to a ZFS RAID.


    I'm not sure where to put the OMV base system (it's currently on a very old HDD which is already throwing SMART alerts):


    1. Put it on the 64 GB SSD


    2. Put it on a USB stick and use the 64 GB SSD as a cache drive for the ZFS pool

    What do you think?
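    For option 2, adding the SSD as an L2ARC read cache is a single command once the pool exists; the pool name matches the earlier posts, but the device path here is a hypothetical example:

    ```
    # add the 64 GB SSD as an L2ARC cache device (device path is hypothetical;
    # use the real /dev/disk/by-id/ entry of the SanDisk)
    zpool add ZFS_POOL cache /dev/disk/by-id/ata-SanDisk_SDSSDP064G
    ```

    Worth noting: L2ARC mostly helps repeated random reads, so for mostly-sequential media streaming the benefit is indeed doubtful.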

    OK, I had a look into dleidert's scripts. I don't think it makes any sense for me to start the journey 2-3-4-5; it's easier to just start from scratch and re-import my RAID. I modified my OMV back in the day (newer Netatalk and other hacks), so an upgrade disaster would be just around the corner ;)

    For others, this is the post from dleidert:


    Upgrade Scripts for non-interactive major release upgrades (2->3, 3->4, 4->5)


    I will do the backup, set up a new OMV, and see if and how it works. Then I can decide whether I keep OMV on this machine or use Xigma as well. In the beginning I liked OMV because of Plex/Sickbeard and co., which I don't use anymore.

    The main reason that drove me away back then (at least on my big NAS) was the lacking ZFS support and some outdated packages (especially Netatalk and Samba). On the other hand, Xigma became more and more commercial after the name change, and I prefer clean open source. And finally, I'm firm with Linux but not with FreeBSD, another point for OMV.
    I think I will try a USB boot for OMV and use the little SSD as a cache for ZFS. Let's see...