Posts by demslam

    The system was updated. The issue seemed to appear when an XFS file system had a problem and the system forced an unmount of that file system. I rebooted and the file system is back.

    I should add that all the hard drives SnapRAID goes through are connected to the system via three SAT2-MV8 PCI-X cards. Shortly after the kernel error I get a "kernel:[ 5970.560781] NMI watchdog: Watchdog detected hard LOCKUP on cpu 0".

    I recently did a fresh install of OMV 4 (updating from OMV 2). I am using mergerfs with SnapRAID for storage. I am having an issue when I run SnapRAID that appears to be a kernel bug; it always shows up at around 20% of the snapraid sync. I never had the issue in OMV 2, so I am going to try a backported kernel temporarily. Does anyone have any idea how I could fix this issue?
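
    In case it helps, here is a minimal sketch of how I plan to pull in the backported kernel. It assumes OMV 4 is sitting on a Debian Stretch base (the release name is my assumption) and the exact metapackage name may differ on other hardware:

    # Enable the stretch-backports repository (assumes a Debian Stretch base)
    echo "deb http://deb.debian.org/debian stretch-backports main contrib non-free" > /etc/apt/sources.list.d/stretch-backports.list
    apt-get update
    # Install the newer kernel from backports and reboot into it
    apt-get -t stretch-backports install linux-image-amd64
    reboot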


    So I figured I would share this experience.
    I had updated from OMV 2.x to OMV 4.x via a fresh install.
    OMV 2 was able to see some existing partitions, but in OMV 4 those partitions were not being seen. After digging into things (and remembering from way back), the missing partitions were on hard drives that had been part of an old ZFS pool. I was able to confirm this by running "lsblk -f", where the FSTYPE showed the drives as members of a ZFS pool. Luckily some of the disks in my NAS were never part of a pool, so I mounted the old ZFS-pool disks via "mount -t xfs /dev/sdX1 /mnt/sdx1", moved the data off of those drives, ran "wipefs --all /dev/sdX" to remove the FSTYPE, and created new partitions.
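
    For reference, the rough sequence was something like the sketch below; /dev/sdX and the backup destination are placeholders, so double-check the device names on your own system before wiping anything:

    # Show filesystem signatures; the affected drives showed up with a ZFS-member FSTYPE
    lsblk -f
    # Mount the partition manually to get at the data
    mkdir -p /mnt/sdx1
    mount -t xfs /dev/sdX1 /mnt/sdx1
    # Copy the data somewhere safe first (destination is a placeholder)
    rsync -av /mnt/sdx1/ /media/other-disk/backup/
    # Remove ALL filesystem signatures from the disk, then create new partitions
    umount /mnt/sdx1
    wipefs --all /dev/sdX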

    Bottom of the SnapRAID automated email from Sun Jul 16:
    2tb_01     0% |
    2tb_02    33% | *******************
    2tb_03     9% | *****
    2tb_04     3% | *
    3tb02      0% |
    3tb01      0% |
    3tb03      0% |
    5tb01a3    0% |
    5tb02b3    0% |
    parity     2% | *
    2-parity   2% | *
    raid      29% | *****************
    hash      17% | **********
    sched      1% |
    misc       0% |
                  |____________________________________________________________
                   wait time (total, less is better)

    Everything OK
    Saving state to /media/49e74ac0-8db0-45ba-9a49-bedad267998f/snapraid.content...
    Saving state to /media/6399c001-154d-4233-885b-e910f79783e7/snapraid.content...
    Saving state to /media/f6371aca-cc94-4a81-bf05-dffea0d33382/snapraid.content...
    Verifying /media/49e74ac0-8db0-45ba-9a49-bedad267998f/snapraid.content...
    Verifying /media/6399c001-154d-4233-885b-e910f79783e7/snapraid.content...
    Verifying /media/f6371aca-cc94-4a81-bf05-dffea0d33382/snapraid.content...
    SnapRAID SYNC Job finished - Sun Jul 16 02:27:36 PDT 2017
    ----------------------------------------
    SnapRAID SCRUB - Cycle count (7) not met (4). No scrub was run. - Sun Jul 16 02:27:36 PDT 2017

    Yours, SnapRAID-diff script



    Bottom of the SnapRAID automated email from Sat Jul 22:
    2tb_01     0% |
    2tb_02    32% | *******************
    2tb_03    11% | ******
    2tb_04     1% | *
    3tb02      0% |
    3tb01      0% |
    3tb03      0% |
    5tb01a3    0% |
    5tb02b3    0% |
    parity     0% |
    2-parity   0% |
    raid      35% | ********************
    hash      17% | **********
    sched      0% |
    misc       0% |
                  |____________________________________________________________
                   wait time (total, less is better)

    22752 file errors
    0 io errors
    0 data errors
    Saving state to /media/49e74ac0-8db0-45ba-9a49-bedad267998f/snapraid.content...
    Saving state to /media/6399c001-154d-4233-885b-e910f79783e7/snapraid.content...
    Saving state to /media/f6371aca-cc94-4a81-bf05-dffea0d33382/snapraid.content...
    Verifying /media/49e74ac0-8db0-45ba-9a49-bedad267998f/snapraid.content...
    Verifying /media/6399c001-154d-4233-885b-e910f79783e7/snapraid.content...
    Verifying /media/f6371aca-cc94-4a81-bf05-dffea0d33382/snapraid.content...
    SnapRAID SCRUB Job finished - Sat Jul 22 06:14:48 PDT 2017
    ----------------------------------------

    Yours, SnapRAID-diff script

    /etc/fstab
    # /etc/fstab: static file system information.
    #
    # Use 'blkid' to print the universally unique identifier for a
    # device; this may be used with UUID= as a more robust way to name devices
    # that works even if disks are added and removed. See fstab(5).
    #
    # <file system> <mount point> <type> <options> <dump> <pass>
    # / was on /dev/sdl1 during installation
    UUID=bdc068df-be37-4d01-974d-e317750a35c1 / ext4 errors=remount-ro 0 1
    # swap was on /dev/sdl5 during installation
    UUID=a3ed0cca-62bb-436e-b530-f8e45db25c5f none swap sw 0 0
    /dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
    /dev/sr1 /media/cdrom1 udf,iso9660 user,noauto 0 0
    /dev/sda1 /media/usb0 auto rw,user,noauto 0 0
    # >>> [openmediavault]
    UUID=adfc7388-4210-42ee-9013-f1546bf4bfe3 /media/adfc7388-4210-42ee-9013-f1546bf4bfe3 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    UUID=49e74ac0-8db0-45ba-9a49-bedad267998f /media/49e74ac0-8db0-45ba-9a49-bedad267998f ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    UUID=6399c001-154d-4233-885b-e910f79783e7 /media/6399c001-154d-4233-885b-e910f79783e7 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    UUID=f6371aca-cc94-4a81-bf05-dffea0d33382 /media/f6371aca-cc94-4a81-bf05-dffea0d33382 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    UUID=07af485c-367a-4b5c-a1f0-fb3050720d24 /media/07af485c-367a-4b5c-a1f0-fb3050720d24 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    UUID=ed5b0af4-7d68-4069-824c-b3f1642406a7 /media/ed5b0af4-7d68-4069-824c-b3f1642406a7 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    UUID=33a402e0-d971-4f51-a10c-0f14314ad311 /media/33a402e0-d971-4f51-a10c-0f14314ad311 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    UUID=1a3a3379-f70e-4f7c-8fa1-4b234fd2a0a4 /media/1a3a3379-f70e-4f7c-8fa1-4b234fd2a0a4 xfs defaults,nofail,noexec,usrquota,grpquota,inode64 0 2
    UUID=c5121cf7-715c-46d8-afc9-6dc2d413b528 /media/c5121cf7-715c-46d8-afc9-6dc2d413b528 xfs defaults,nofail,noexec,usrquota,grpquota,inode64 0 2
    UUID=936f3d1a-4886-4893-8505-d8e930825778 /media/936f3d1a-4886-4893-8505-d8e930825778 xfs defaults,nofail,noexec,usrquota,grpquota,inode64 0 2
    UUID=5fd407a1-4899-4eeb-87b3-c6f6b6ca73b9 /media/5fd407a1-4899-4eeb-87b3-c6f6b6ca73b9 xfs defaults,nofail,noexec,usrquota,grpquota,inode64 0 2
    UUID=d60d3cc6-162e-46b3-9e9a-0e9019dc6aaa /media/d60d3cc6-162e-46b3-9e9a-0e9019dc6aaa xfs defaults,nofail,noexec,usrquota,grpquota,inode64 0 2
    /media/07af485c-367a-4b5c-a1f0-fb3050720d24:/media/f6371aca-cc94-4a81-bf05-dffea0d33382:/media/6399c001-154d-4233-885b-e910f79783e7:/media/49e74ac0-8db0-45ba-9a49-bedad267998f:/media/936f3d1a-4886-4893-8505-d8e930825778:/media/1a3a3379-f70e-4f7c-8fa1-4b234fd2a0a4:/media/c5121cf7-715c-46d8-afc9-6dc2d413b528:/media/d60d3cc6-162e-46b3-9e9a-0e9019dc6aaa:/media/5fd407a1-4899-4eeb-87b3-c6f6b6ca73b9 /media/26371579-5169-4eb8-852a-31ff6f14626e fuse.mergerfs defaults,allow_other,category.create=mfs,minfreespace=4G 0 0
    /media/26371579-5169-4eb8-852a-31ff6f14626e/media /export/media none bind 0 0
    /media/26371579-5169-4eb8-852a-31ff6f14626e/winshare /export/winshare none bind 0 0
    # <<< [openmediavault]


    /etc/snapraid.conf
    # this file was automatically generated from
    # openmediavault Stone burner 2.2.14
    # and 'openmediavault-snapraid' 1.5


    block_size 256
    autosave 0
    nohidden


    #####################################################################
    # OMV-Name: 2tb_01 Drive Label: 2tb_01
    content /media/49e74ac0-8db0-45ba-9a49-bedad267998f/snapraid.content
    disk 2tb_01 /media/49e74ac0-8db0-45ba-9a49-bedad267998f


    #####################################################################
    # OMV-Name: 2tb_02 Drive Label: 2tb_02
    content /media/6399c001-154d-4233-885b-e910f79783e7/snapraid.content
    disk 2tb_02 /media/6399c001-154d-4233-885b-e910f79783e7


    #####################################################################
    # OMV-Name: 2tb_03 Drive Label: 2tb_03
    content /media/f6371aca-cc94-4a81-bf05-dffea0d33382/snapraid.content
    disk 2tb_03 /media/f6371aca-cc94-4a81-bf05-dffea0d33382


    #####################################################################
    # OMV-Name: 2tb_04 Drive Label: 2tb_04
    disk 2tb_04 /media/07af485c-367a-4b5c-a1f0-fb3050720d24


    #####################################################################
    # OMV-Name: 6tb_01_A6 Drive Label: 6tb_01_A6
    parity /media/33a402e0-d971-4f51-a10c-0f14314ad311/snapraid.parity


    #####################################################################
    # OMV-Name: 6tb_02_B6 Drive Label: 6tb_02_B6
    2-parity /media/ed5b0af4-7d68-4069-824c-b3f1642406a7/snapraid.2-parity


    #####################################################################
    # OMV-Name: 3tb02 Drive Label: 3tb02
    disk 3tb02 /media/c5121cf7-715c-46d8-afc9-6dc2d413b528


    #####################################################################
    # OMV-Name: 3tb01 Drive Label: 3tb01
    disk 3tb01 /media/1a3a3379-f70e-4f7c-8fa1-4b234fd2a0a4


    #####################################################################
    # OMV-Name: 3tb03 Drive Label: 3tb03
    disk 3tb03 /media/936f3d1a-4886-4893-8505-d8e930825778


    #####################################################################
    # OMV-Name: 5tb01a3 Drive Label: 5tb01a3
    disk 5tb01a3 /media/d60d3cc6-162e-46b3-9e9a-0e9019dc6aaa


    #####################################################################
    # OMV-Name: 5tb02b3 Drive Label: 5tb02b3
    disk 5tb02b3 /media/5fd407a1-4899-4eeb-87b3-c6f6b6ca73b9


    exclude *.bak
    exclude *.unrecoverable
    exclude /tmp/
    exclude lost+found/
    exclude .content
    exclude aquota.group
    exclude aquota.user
    exclude snapraid.conf*

    Hello,
    I am having an issue with SnapRAID. I have added 5 more disks to my pool (it was AUFS, but when I added the new disks I changed over to mergerFS), and the pool appears to be working fine. But when the SnapRAID cron job runs I get this email:
    WARNING! All the files previously present in disk '5tb01a3' at dir '/media/d60d3cc6-162e-46b3-9e9a-0e9019dc6aaa/'
    are now missing or rewritten!
    This could happen when restoring a disk with a backup
    program that is not setting correctly the timestamps.
    If you want to 'sync' anyway, use 'snapraid --force-empty sync'.
    When I SSH into the machine and check that drive, everything is there, and I am still able to access the files on that drive via the mergerfs file share.
    I know I should not have changed two things at once when I did my upgrade (adding disks and changing the drive-pooling method).
    Thank you for any information.
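
    In case it is useful context for whoever answers: the commands the warning points at would roughly be the ones below. Running "snapraid diff" first is just my own precaution before forcing anything:

    # See what SnapRAID thinks has changed before touching parity
    snapraid diff
    # Confirm the data really is still on the disk that is being reported as empty
    ls /media/d60d3cc6-162e-46b3-9e9a-0e9019dc6aaa/ | head
    # Only if the files are there and the diff looks sane, force the sync as the warning suggests
    snapraid --force-empty sync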

    Would you mind explaining how to configure this? I like getting the email notifications but would also like to have Pushbullet.
    Currently I have notifications going through smtp.gmail.com, and when I add "pushbullet@??" to the secondary email field it does not allow me to save the configuration. If I need to remove smtp.gmail.com, what are the new settings?
    Thank you

    Hi, is there a possibility to run a Docker container under a certain user? I'm using BitTorrent Sync, and the synced files should be saved as user:users and not as root:users. BTSync is running on my NAS and once a week the files get synchronized…


    That depends on how the Docker container was set up, I think.
    I have added "-e PUID=65534" to "Extra args" to get my container to run as nobody.
    If you type "id -u user" in the terminal it should give you a UID; then just add "-e PUID=####" and it should work (I think).
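
    To spell it out a bit, with the caveat that PUID/PGID is a convention honoured by some images rather than a core Docker feature, and that the image name and paths below are placeholders:

    # Find the numeric uid/gid of the account the synced files should belong to
    id -u user    # e.g. 1000
    id -g user    # primary gid, e.g. 100 if the primary group is "users"
    # Images that support the PUID/PGID convention can be told which account to drop to:
    docker run -d --name btsync -e PUID=1000 -e PGID=100 -v /path/to/sync:/sync some/btsync-image
    # Docker itself also has a generic option that works if the image does not insist on root:
    docker run -d --user 1000:100 some/btsync-image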