Posts by warhawk8080

    I found this...I have a 16G SD card


    I had /dev/mmcblk0p2 and /dev/mmcblk0p3 both formatted btrfs


    I found this link online

    https://www.suse.com/support/kb/doc/?id=000018798


I did the

Code
btrfs device add /dev/mmcblk0p3 /

and in the control panel it shows both of the block devices together, with the full combined size available


Then I ran

Code
btrfs filesystem balance /

and it says they are together as one unit even though there are two partitions. It did give a warning that it could take a long time to balance...but it only took a few minutes
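For anyone following along, here is the whole sequence in one place (a minimal sketch; it assumes the extra partition is /dev/mmcblk0p3 and the btrfs root is mounted at /):

Code
# add the second partition to the mounted btrfs filesystem
btrfs device add /dev/mmcblk0p3 /
# spread the existing data across both devices
btrfs balance start /
# confirm both devices now belong to the one filesystem
btrfs filesystem show /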

    maybe that will help others


Correction...once I started putting containers in, it failed on reboot


I went ahead and just stopped the service from starting altogether


    Code
    # systemctl disable docker

    Then created a /etc/rc.local file


Put a command in there to force-mount my zfs pool


    Bash
    #!/bin/sh
    /sbin/zfs mount -O -a
    exit 0
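One gotcha worth noting: on systemd systems, /etc/rc.local is only picked up if it exists and is executable, so after creating it:

Code
# chmod +x /etc/rc.local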


This...seems to work fine. The odd thing is that with the docker service disabled but Portainer in use, the docker containers are still running, yet there is no docker startup to interfere with the filesystem, reboot after reboot


Currently have OMV5 working with zfs and docker; all my docker containers are in a filesystem called "appdata" and they are persistent and working well


Heck, I could probably just skip the /etc/rc.local file altogether

Yeah...it's ugly...improper...but it DOES work!


    Idea gotten from here

    https://utcc.utoronto.ca/~cks/…tRestriction?showcomments


Also...after further digging...the proper place to make the -O (overlay mount) change would be /etc/systemd/system/zfs.target.wants/zfs-mount.service, which is actually a link to /lib/systemd/system/zfs-mount.service, on systemd-based OSes


    In the

    Code
    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/sbin/zfs mount -a

    change to

    Code
    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/sbin/zfs mount -O -a

No need for a deprecated /etc/rc.local file; the change lives in /lib/systemd/system/zfs-mount.service
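A slightly cleaner variant (so a package update doesn't overwrite the edit) would be a drop-in override instead of editing the unit file directly; a minimal sketch:

Code
# systemctl edit zfs-mount.service

then in the editor that opens, clear and replace the ExecStart:

Code
[Service]
ExecStart=
ExecStart=/sbin/zfs mount -O -a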

I had the same problem; my buddy and I racked our brains for a few hours...started fiddling and I think I might have found a solution... (albeit probably not the RIGHT way to fix it...but a working fix)


It seems that docker was loading before (or while) the zfs (or btrfs) array was fully initialized and mounted by the zfs daemon, causing the bind mounts and the daemons to pitch a fit and stop working


I tried and tried and tried to figure out why, and then started thinking it might be a timing issue with the services starting. I even went so far as to stop the docker service, manually import the zpool, then restart the service...and when I did that, it worked fine


Then I wrote a /etc/rc.local script to do it "automagically", which is a VERY nasty and brute-force way of doing things


Well, I did a few things


first I verified which runlevel I was in

    Code
    # runlevel
    N 5

then I went into /etc/rc5.d and found S01docker. Coincidentally, ALL the links in there were marked S01 (so they all start at the same sequence position, more or less in parallel, I guess) and pointed to the files in /etc/init.d


So, in /etc/rc5.d, I just ran

Code
# mv S01docker S05docker

which changed it to a larger number and moved it behind all the other processes that start before it


Then I went into /etc/init.d and changed the docker script's ### BEGIN INIT INFO section to read


    Code
    # Required-Start: $all $syslog $remote_fs


    from

    Code
    # Required-Start: $syslog $remote_fs

which means it needs all services before it to be loaded/started before it loads
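If you'd rather not rename the rc5.d links by hand, the more conventional Debian route (a sketch, assuming the sysv-rc tooling is present) is to let update-rc.d regenerate them from the edited header:

Code
# update-rc.d -f docker remove
# update-rc.d docker defaults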


Now, after two reboots...all my zfs mounts are there and docker is running happily, so it seems the fix is persistent


I did remove the /etc/rc.local that I created to stop docker, mount the zfs, then restart docker...the above fix seems to make docker start last, AFTER everything else is done and all the zfs volumes have been mounted


    I also put a symlink from /var/lib/docker to my /zfsmount/docker so the data is stored in my large storage array
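For reference, the relocation itself is just a stop/move/symlink (a sketch; /zfsmount/docker is where my pool lives, adjust to yours, and stop docker first so nothing is writing):

Code
# systemctl stop docker
# mv /var/lib/docker /zfsmount/docker
# ln -s /zfsmount/docker /var/lib/docker
# systemctl start docker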


    I hope this helps...

Hi everyone - WarHawk8080, it looks like you are running it natively as opposed to under Docker. I have an ODROID XU4 running OMV v4(.x), but I am having a whale of a time translating "raw" Docker config into OMV Docker. TechnoDadLife, I've watched all your tutorials on pretty much every OMV Docker thing you've done, but I can't get this one to stick. Any ideas, ladies and gents? Thanks so much


No need to translate. You can also run the commands from the CLI, or use docker-compose. The container will be visible afterwards in the Docker GUI plugin.
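As a rough illustration of the CLI route (the image name and host path below are placeholder assumptions, not OMV specifics):

Code
# run a container detached, with its data bind-mounted from the host
docker run -d --name boinc \
  --restart unless-stopped \
  -v /srv/appdata/boinc:/var/lib/boinc \
  boinc/client

The container then shows up in the Docker GUI plugin like any other.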

Ah, you are correct BTech...I am running it natively; forgive me for not mentioning that it is not running in a docker container


And thank you macom for translating it so it can be run in a docker container (I didn't run mine in docker because I don't know docker all that well, but running it as the non-root "boinc" user seems ok)

    Worked like a champ! (other than editing the version I downloaded) Thanks!

I installed the module, it compiled the ZFS modules, and I created a ZFS pool with no issues; it's been running solid as a rock!


Other than basic monitoring, seeing how the pool is doing, adding subfolders, and file sharing, I do everything else from the command line, which is VERY easy
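For the command-line side, the day-to-day commands are short (a few common examples; the pool and dataset names here are made up):

Code
zpool status tank                          # health and scrub/resilver state
zfs list                                   # datasets and space usage
zfs create tank/appdata                    # create a new sub-filesystem
zfs snapshot tank/appdata@before-upgrade   # take an instant snapshot
zpool scrub tank                           # kick off an integrity check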


    More or less if it works on debian...it works just fine on OMV
    https://unix.stackexchange.com/questions/383566/install-zfs-on-debian-9-stretch


    plus there are tons of cheatsheet links out there for ZFS


The DOCS point to this repository, which has more info...but adding OMV-Extras and then the ZFS module does it all pretty much automatically
    https://github.com/zfsonlinux/zfs/wiki/Debian
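For the curious, the manual route on plain Debian boils down to something like this (a sketch for Debian 9 "stretch", per the links above; the OMV-Extras plugin automates all of it):

Code
# ZFS lives in the contrib section, so enable it plus backports first
echo "deb http://deb.debian.org/debian stretch-backports main contrib" > /etc/apt/sources.list.d/backports.list
apt update
# DKMS builds the module against the running kernel's headers
apt install -t stretch-backports linux-headers-$(uname -r) zfs-dkms zfsutils-linux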

I got it working on an Orange Pi PC with the above link.
However, I had to install java using this link
http://www.rpiblog.com/2014/03…dk-8-on-raspberry-pi.html
remember you have to edit your eula.txt and change the false to true for it to start
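That eula.txt edit is a one-liner, assuming the usual eula=false line (run it in the directory the server created eula.txt in):

Code
sed -i 's/eula=false/eula=true/' eula.txt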

Here is how I got mine working. I had to plug away to get it to work...but it is crunching WUs


From this site: https://unix.stackexchange.com…ros-and-cons-of-ia32-libs
sudo dpkg --add-architecture i386 <- adds multiarch capability
sudo apt-get install libc6:i386 <- installs the i386 libs needed for processing WUs; without them, BOINC and SETI@home fail instantly


    from this website: https://boinc.berkeley.edu/wiki/Installing_BOINC#Linux
    sudo apt-get install libstdc++6 libstdc++5 freeglut3


it suggests replacement packages; install them


install BOINC as described on the page above


    sudo apt-get install boinc-client boinc-manager


    sudo chown -R boinc:boinc /var/lib/boinc-client
    sudo chown boinc:boinc /usr/bin/boinc
    sudo chown boinc:boinc /usr/bin/boincmgr
    sudo chown boinc:boinc /usr/bin/boinccmd


make sure you edit
/var/lib/boinc-client/remote_hosts.cfg
with the IP of the computer that will be remotely controlling the instance on the server,
and edit /var/lib/boinc-client/gui_rpc_auth.cfg
with a password (it doesn't have to be uber strong unless you want it to be; mine is 123)
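Concretely, those two edits could look like this (the IP is a made-up example for the controlling machine; restart the client so the changes take effect):

Code
echo "192.168.1.50" | sudo tee /var/lib/boinc-client/remote_hosts.cfg
echo "123" | sudo tee /var/lib/boinc-client/gui_rpc_auth.cfg
sudo systemctl restart boinc-client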


To manually attach to a project:

Code
boinccmd --project_attach <url> <key of your account>
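For example (the project URL is real; the account key below is a made-up placeholder, yours comes from your account page):

Code
boinccmd --project_attach https://setiathome.berkeley.edu 1234567890abcdef   # key is a placeholder
boinccmd --get_state | head                                                  # quick sanity check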




    https://boinc.netsoft-online.c…150e4ff014f12199a6&html=1

    I found this

    https://docs.oracle.com/cd/E53394_01/html/E54801/ghzvz.html


• Physically connect the replacement disk.
• Attach the new disk to the root pool:
# zpool attach root-pool current-disk new-disk
where current-disk becomes old-disk, to be detached at the end of this procedure. The correct disk labeling and the boot blocks are applied automatically. Note - if the disks have SMI (VTOC) labels, make sure that you include the slice when specifying the disk, such as c2t0d0s0.
• View the root pool status to confirm that resilvering is complete. If resilvering has been completed, the output includes a message similar to the following:
scan: resilvered 11.6G in 0h5m with 0 errors on Fri Jul 20 13:57:25 2014
• Verify that you can boot successfully from the new disk.
• After a successful boot, detach the old disk:
# zpool detach root-pool old-disk
where old-disk is the current-disk of Step 2. The same SMI (VTOC) slice note applies.
• If the attached disk is larger than the existing disk, enable the ZFS autoexpand property:
# zpool set autoexpand=on root-pool
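Condensed into commands, the whole swap looks like this (a sketch using the doc's placeholder names: pool root-pool, old disk c2t0d0s0, new disk c2t1d0s0 assumed):

Code
# zpool attach root-pool c2t0d0s0 c2t1d0s0   # mirror onto the new disk
# zpool status root-pool                     # wait until resilvering completes
# (verify you can boot from the new disk)
# zpool detach root-pool c2t0d0s0            # then drop the old disk
# zpool set autoexpand=on root-pool          # only if the new disk is larger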

Quite useful documentation! The positive thing with ZFS is that you can do almost everything from the command line.
I don't want to give up ZFS anymore.


Oh yeah...it's amazing...it's like RAID but "automatic", and it includes the snapshot stuff (with snapshots and the automatic protections built in)...automatic error correction, the ability to check for and repair bit rot, and more or less maintain itself...


I went with RAIDZ (the equivalent of RAID-5, with only "one" parity drive), but they have up to RAIDZ3, which on a HUGE array can provide MORE protection for data than RAID-5 or RAID-6 (think BACKBLAZE)


I'm just so surprised that ZFS isn't a standard option already built in...there is already a package for it (in debian as well)...the plugin works fine, but the command line really is where the power of the system lies


    Found some more documentation
    https://www.howtoforge.com/tut…e-zfs-on-debian-8-jessie/

    When my SD cards would get corrupted...I would pull them and put them in a laptop running linux...then use gparted to scan the drive for errors


    I wonder if you could use a livecd linux and boot that to see if you can access the data on the drives?

That is OMV's boot partition, not your raid. Your two raid drives are /dev/sda and /dev/sdb; they are clearly shown under Storage -> Disks. To confirm they are your raid drives, run blkid; then you'll need to create, then assemble, to get the raid up.
Another way to do this is to attach one drive to a Linux machine via USB and get the data off that way; as it's a raid 1, the drives are identical. Another option, if you only have a Windows machine, is to use Ext2Fsd. Does this work? I have used it once to recover data from what appeared to be a failed drive on an Ubuntu install; the drive was fine, but for whatever reason Ubuntu wouldn't mount it.
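The blkid/assemble step would look roughly like this (a sketch; the device names are taken from the post above, the array name is assumed, and note that --assemble is non-destructive, unlike --create):

Code
blkid                                          # confirm the raid member devices
mdadm --assemble --scan                        # try auto-assembly from the superblocks
mdadm --assemble /dev/md0 /dev/sda /dev/sdb    # or assemble explicitly
mount /dev/md0 /mnt                            # then mount and copy the data off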

This webpage shows it does
    https://www.howtogeek.com/1128…-partitions-from-windows/