Posts by takiyon

    Ever since I started using ZFS for my file system I keep getting these errors. File copy operations stop and need to be restarted. I have tried messing with all kinds of parameters on the ZFS side and have even tried the following Samba parameters. If I don't set any Samba parameters I get more crashes. Any help out there?


    SAMBA Parameters

    Code
    socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=65536 SO_SNDBUF=65536 SO_KEEPALIVE
    read raw = yes
    write raw = yes
    max xmit = 65535 
    dead time = 15
    getwd cache = yes
    server multi channel support = yes
    aio read size = 16384
    aio write size = 16384
    vfs objects = dfs_samba4 acl_xattr


    SAMBA Error Email:


    Code
    The Samba 'panic action' script, /usr/share/samba/panic-action, was called for PID 6964 ().

    This means there was a problem with the program, such as a segfault.
    However, the executable could not be found for process 6964.
    It may have died unexpectedly, or you may not have permission to debug the process.
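
    In the meantime I am planning to raise the Samba log level while a copy stalls, to get more detail out of it. Something along these lines should do it, assuming the stock Debian log paths under /var/log/samba:


    Code
    # check the smb.conf syntax first
    testparm -s

    # bump logging on the running daemons without a restart (level 3 is already quite chatty)
    smbcontrol all debug 3

    # then watch the logs while a copy stalls
    tail -f /var/log/samba/log.smbd
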

    The plugin creates bind mounts in /etc/fstab. systemd *should* do the right thing.


    How does that complicate things? You create an SFTP Docker container on a port with one folder shared into the container. It achieves the separation very well. I use SFTP and will probably switch to Docker soon.

    You are correct. The SFTP plugin created two entries in fstab that point to folders on the ZFS mount. Somehow ZFS does not like it, because on restart it will not mount the folders that SFTP put in fstab. I will probably move over to Docker; I am just not very familiar with how Docker works, and when I have tried to use it in the past it was just a pain. I get the concept of containers, but updating them was a PITA in the past.
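
    To double-check what those entries actually are, I am going to compare fstab against what is currently mounted, roughly like this (the bind-mount line shown is only a made-up example; mine point somewhere under /StorPoolMmbr01):


    Code
    # show any bind entries the plugin wrote into fstab
    grep bind /etc/fstab
    # a hypothetical example of what such a line looks like:
    #   /StorPoolMmbr01/UserDataFS/someuser  /sftp/someuser  none  bind  0  0

    # list what is actually bind-mounted right now
    findmnt | grep StorPoolMmbr01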

    If SFTP bind mounts them, then that makes sense. Here is what happens...


    Code
    zfs list
    NAME                                 USED  AVAIL  REFER  MOUNTPOINT
    StorPoolMmbr01                      12.5T  2.79T   363K  /StorPoolMmbr01
    StorPoolMmbr01/ApplicationServerFS   465G  2.79T   465G  /StorPoolMmbr01/ApplicationServerFS
    StorPoolMmbr01/BackupDataFS          672G  2.79T   672G  /StorPoolMmbr01/BackupDataFS
    StorPoolMmbr01/DownloadsFS           192G  2.79T   192G  /StorPoolMmbr01/DownloadsFS
    StorPoolMmbr01/MediaAlbumsFS         386G  2.79T   386G  /StorPoolMmbr01/MediaAlbumsFS
    StorPoolMmbr01/MediaCollectionFS    9.56T  2.79T  9.56T  /StorPoolMmbr01/MediaCollectionFS
    StorPoolMmbr01/SoftwareFS           1.04T  2.79T  1.04T  /StorPoolMmbr01/SoftwareFS
    StorPoolMmbr01/UserDataFS            197G  2.79T   197G  /StorPoolMmbr01/UserDataFS


    I broke everything up into different datasets.


    When I look at the root file system, this is what I see...

    As you can see StorPoolMmbr01 is the mount point of my ZFS pool.


    These are the folders under StorPoolMmbr01.


    Code
    drwxr-xr-x  4 root root  4 Sep 28 01:49 ApplicationServerFS
    drwxr-xr-x  7 root root  7 Sep 28 02:10 BackupDataFS
    drwxr-xr-x  5 root root  5 Sep 30 08:47 DownloadsFS
    drwxr-xr-x  4 root root  4 Sep 28 02:18 MediaAlbumsFS
    drwxr-xr-x 10 root root 10 Sep 28 02:19 MediaCollectionFS
    drwxr-xr-x  3 root root  3 Sep 28 02:19 SoftwareFS
    drwxr-xr-x  5 root root  5 Oct  1 08:06 UserDataFS

    The problem is that when the machine is restarted, the folders underneath UserDataFS and MediaCollectionFS are never empty on startup. When ZFS tries to mount the pool and all the datasets, it fails on those two. I think this is because SFTP somehow puts folders under them that I then have to delete. If those folders are left behind still bound, then that makes sense and would explain why ZFS fails to mount UserDataFS and MediaCollectionFS on startup. How would I know if SFTP is cleaning up during system shutdown? Perhaps there is something I can do to work around it; I don't want to start complicating things by using Docker containers.
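
    For what it's worth, ZFS normally refuses to mount a dataset into a non-empty directory, so the first thing I plan to check is what is sitting inside those two mountpoints before the pool mounts. A rough sketch (the dataset names match mine above; the cleanup step is just how I would test it):


    Code
    # see whether the two datasets are mounted and what is underneath their mountpoints
    zfs get mounted StorPoolMmbr01/UserDataFS StorPoolMmbr01/MediaCollectionFS
    ls -la /StorPoolMmbr01/UserDataFS /StorPoolMmbr01/MediaCollectionFS

    # if only empty leftover directories are in the way, clear them and retry
    # (rmdir refuses to remove anything that still has contents, so this is safe)
    rmdir /StorPoolMmbr01/UserDataFS/* /StorPoolMmbr01/MediaCollectionFS/*
    zfs mount -a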

    So I have been banging my head on this for a while. Basically I moved my entire 20TB ext4 pool to a backup server and then started the process of moving everything into a new 20TB ZFS pool. The crazy thing is that after hours and hours of trying to figure out why two of my ZFS datasets disappear, I uninstalled SFTP. Once I restarted, ZFS unmounted and mounted as normal. Apparently SFTP does something using tmpfs and it conflicts with the mounts that ZFS uses. Aside from just uninstalling the SFTP plugin, are there any ideas on how to get this working, or any alternatives? If alternatives are suggested, they must be encrypted links.
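
    One alternative I am considering is dropping the plugin and using the SFTP support already built into OpenSSH, chrooted to a single dataset, since that needs no bind mounts or tmpfs at all and the link stays encrypted. A rough sketch of the sshd_config piece ("sftponly" is a group I would create myself, and the path is just my layout):


    Code
    # /etc/ssh/sshd_config
    Subsystem sftp internal-sftp

    Match Group sftponly
        ChrootDirectory /StorPoolMmbr01/UserDataFS/%u
        ForceCommand internal-sftp
        AllowTcpForwarding no
        X11Forwarding no
    # note: the chroot target must be owned by root and not writable by anyone else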


    Thank You, ALL!

    I did not realize. I thought that because it was a plugin-based install, it did not automatically update itself. If that is the case I will stick with the urbackup plugin. My apologies.

    The plugin doesn't need to be updated. The urbackup package itself *could* be updated more frequently. Are you running into problems not having the bleeding-edge version? You will have the same problem running straight Debian - you will have to check for updates all the time, download manually and install with dpkg. If there is an important bugfix, someone could just ping me to let me know there is a new version (and hopefully test the update to be sure it is working well).

    Please share how you updated. The latest version is 2.2.11, and I am hoping to consolidate my separate backup server onto my ESXi machine. I may choose a straight Debian install with urbackup, since the OMV plugin is not updated very frequently.
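
    If I do go the straight Debian route, the manual update flow would be roughly this (the URL and version below are placeholders; the real .deb comes from the urbackup download page):


    Code
    # placeholder URL -- substitute the current server .deb from the urbackup site
    wget https://example.com/urbackup-server_2.2.11_amd64.deb
    dpkg -i urbackup-server_2.2.11_amd64.deb
    # pull in anything dpkg complained about
    apt-get -f install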

    ryecoaaron:
    Hey,
    I updated my OMV machine this evening with the new urbackup server, and everything runs successfully!
    Many thanks and best regards!

    Has anyone figured this out yet? I keep getting these emails.


    Code
    /etc/cron.daily/logrotate:
    mysqladmin: connect to server at 'localhost' failed
    error: 'Access denied for user 'debian-sys-maint'@'localhost' (using password: YES)'
    error: error running shared postrotate script for '/var/log/mysql/mysql.log /var/log/mysql/mysql-slow.log /var/log/mysql/mariadb-slow.log /var/log/mysql/error.log '
    run-parts: /etc/cron.daily/logrotate exited with return code 1
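
    From the message it looks like the debian-sys-maint password stored in /etc/mysql/debian.cnf no longer matches the account inside MySQL. If that is the case, something like this should bring them back in sync (I have not tried it on my box yet; on newer MariaDB/MySQL the SET PASSWORD syntax drops the PASSWORD() wrapper):


    Code
    # read the password logrotate/mysqladmin is using (the password= line)
    cat /etc/mysql/debian.cnf

    # reset the MySQL account to match that value
    mysql -e "SET PASSWORD FOR 'debian-sys-maint'@'localhost' = PASSWORD('VALUE_FROM_DEBIAN_CNF'); FLUSH PRIVILEGES;"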

    Is there a way in the UPS plugin to run a bash script when the server is running on battery power? I would like to send commands to other VMs to shut down. Is this feature in the works for the next version of the plugin?
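
    In case it is already possible without a plugin change: the plugin appears to be built on NUT, and upsmon can call a script on the ONBATT event. A rough sketch of what I have in mind (file paths, the "vmhost" name and the shutdown delay are my own placeholders, and hand-edits to upsmon.conf may get overwritten when the plugin rewrites it):


    Code
    # additions to /etc/nut/upsmon.conf
    NOTIFYCMD /usr/local/bin/onbattery.sh
    NOTIFYFLAG ONBATT SYSLOG+EXEC


    Code
    #!/bin/bash
    # /usr/local/bin/onbattery.sh -- upsmon exports NOTIFYTYPE and UPSNAME when it runs this
    if [ "$NOTIFYTYPE" = "ONBATT" ]; then
        # placeholder host; key-based ssh from the OMV box to the VM host would need to be set up
        ssh root@vmhost "shutdown -h +1 'UPS on battery'"
    fi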

    As much as I want to upgrade, I am wondering if there are any real differences between versions. I installed OMV 4 on a virtual machine and it looks and feels the same. It kinda feels like the same OMV system, just on top of a new Debian refresh. My question: is there any compelling reason to upgrade to OMV 4 right now? Side note: we need a theme selector; the new UI is such a pain and constantly cuts off text. Please bring back the old UI or make it user-selectable.

    UrBackup is using hardlinks for deduplication. Isn't that single-instance storage?

    Yes, for file backups that's fine, but for disk images UrBackup does not deduplicate. ZFS does block-level deduplication, so two full backups of disk images are not eating up double the space.



    So I finally bit the bullet and started using UrBackup, and finally dumped WHS2011 as a backup solution. However, since UrBackup does not do single-instance storage for images, I decided to run a separate box just for UrBackup on top of ZFS. It's running OMV 3 with UrBackup and ZFS, with deduplication and compression turned on, so it mimics WHS2011 by using less space for client images. So far it's working; I have not tried to do a restore yet. Hopefully this weekend. Thank you all for your suggestions and your help. :thumbup:
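
    For anyone curious, the dataset setup on that box is nothing fancy; roughly the following, assuming a pool named backup (my real pool name differs):


    Code
    # dedicated dataset for the UrBackup storage path
    zfs create backup/urbackup
    zfs set dedup=on backup/urbackup
    zfs set compression=lz4 backup/urbackup

    # confirm the properties took effect
    zfs get dedup,compression backup/urbackup

    Worth noting that the ZFS dedup table lives in RAM, so the box needs plenty of memory for it.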

    So I installed the latest version, but there is a problem with phpvirtualbox. When I log in I get a SOAP error *see below*


    Problem is I don't know what to do with it. I was a software dev for years, but not for PHP or anything that looks like C++. My CPU is an Intel Xeon E5620 and my backup server is an AMD Opteron 1354. On my Intel box it looks like the code is bombing while trying to get the description of my CPU, so I commented out line 4880 in vboxServiceWrappers.php, and no issues so far on that box. I don't know how to fix it properly, but otherwise everything else works fine. The funny thing is that my other system, running the same setup, has had no issues from the start without line 4880 commented out. Does anyone have any ideas???


    Attached are screenshots of what my interface looks like now on both machines. Note that my Intel box is missing the CPU description. Can this be fixed, or does anyone have any ideas of how to fix it?


    Thank you all in advance.

    So, in my last server, which was WHS 2011, I used my Blu-ray recorder to back up my most important data in addition to a hard drive. Does anyone out there recommend a solution for doing this with OMV? I thought about using the VM that runs my WHS 2011 client backup, but I am trying to get away from Windows, and I would rather not use that VM for anything except the client backup of all my machines. Even if I made another VM, would VirtualBox be able to use the drive as a recordable drive? Anyway, I would rather do it natively with Linux, so any assistance would be appreciated. Would be awesome if there was a plug-in for that, but sigh... Oh well... :)
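
    From what I have read, burning straight from the Linux command line with the dvd+rw-tools package works for BD-R as well, so that is probably what I will try first; something like this (the device name and the source paths are just examples):


    Code
    apt-get install dvd+rw-tools

    # start a new disc in /dev/sr0 with Rock Ridge and Joliet extensions
    growisofs -Z /dev/sr0 -R -J /StorPoolMmbr01/UserDataFS/important

    # later sessions can be appended to the same disc with -M instead of -Z
    growisofs -M /dev/sr0 -R -J /StorPoolMmbr01/UserDataFS/more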

    So I modified and ran the script, and I wonder if there is a way to shrink the file size. Unfortunately my SSD is 60GB, and the dd command will just make a 60GB image on my backup drive. After some googling I found the conv=sparse parameter, which I tried, but it did not help; 58GB is what I got. Looks like it's back to Clonezilla again... :(
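
    One thing I may still try before giving up on dd: conv=sparse only helps if the free space is actually zeroed, so zero-filling the free space first and piping the image through gzip should shrink it considerably. A rough sketch (device and file names are examples for my setup, and ideally the imaging is done from a rescue/live environment so the filesystem is not changing underneath dd):


    Code
    # fill the free space with zeros, then delete the filler file
    dd if=/dev/zero of=/zero.fill bs=1M; rm -f /zero.fill

    # image the whole SSD and compress on the fly
    dd if=/dev/sda bs=1M | gzip -c > /backup/ssd-image.img.gz

    # restore later with:
    #   gunzip -c /backup/ssd-image.img.gz | dd of=/dev/sda bs=1M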