Found this...might help with ZFS and docker
it seems docker is trying to start before any of the ZFS stuff is mounted and complete
I had the hardest time getting mine to start with ZFS and docker...I think that might have been the issue all along
Would a link to a folder in your storage array work?
Devs chime in?
I would leave it on to save the writes to the USB (that is what you are trying to save with the flash memory plugin anyway)...using a HDD as swap is still a good thing
I found this...I have a 16G SD card
I had /dev/mmcblk0p2 and /dev/mmcblk0p3 both formatted btrfs
I found this link online:
Expanding the file system by adding a new disk
A convenient and quick solution to add disk space to an existing btrfs file system is by adding a new disk.
The procedure consists of four steps and the system does not need to be rebooted:
- add a new disk
- rescan the SCSI bus so the kernel detects the new disk
- Add the newly added device to the root btrfs filesystem
btrfs device add /dev/sdX /
- At this point the metadata is only stored on the first disk, to distribute (balance) it across the devices run:
btrfs filesystem balance /
and in the control panel it shows both of the blocks together, with all the size available
Then I ran the btrfs filesystem balance /
and it says they are together as one unit even though there are two partitions. It did give a warning that it could take a long time to balance...but it only took a few minutes
maybe that will help others
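Pulled together, the whole session might look like the sketch below. The device name (/dev/sdX) and the SCSI host number (host0) are placeholders that depend on your hardware, and the rescan trigger shown is just one common way to do it; the commands are written to a scratch script and syntax-checked rather than run live.

```shell
# Sketch of the grow-by-adding-a-disk session; /dev/sdX and host0 are
# placeholders -- adjust them to your hardware before running for real.
cat > /tmp/btrfs-grow.sh <<'EOF'
#!/bin/sh -e
# Make the kernel notice the newly attached disk (one common rescan trigger)
echo "- - -" > /sys/class/scsi_host/host0/scan
# Add the new device to the btrfs filesystem mounted at /
btrfs device add /dev/sdX /
# Spread data and metadata across both devices
btrfs filesystem balance /
EOF
sh -n /tmp/btrfs-grow.sh && echo "script parses OK"
```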
It seems docker is mucking up the filesystems...disabling seems to work
Like I said...odd thing...with portainer running...the containers are working just fine
Correction...once I started putting containers in..on reboot it failed
I went ahead and just stopped the service from starting all together
Then created a /etc/rc.local file
Put in there to force mount my zfs pool
This seems to work fine. Odd thing is, with docker off but Portainer in use, the docker containers are running, and there is no docker service to interfere with the filesystem, reboot after reboot
Currently have OMV5 working with zfs and docker and all my docker containers are in a filesystem called "appdata" and they are persistent and working well
Heck I could probably just skip the /etc/rc.local file all together
Yeah..it's ugly...improper...but it DOES work!
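For anyone who wants to try the same hack, an /etc/rc.local along those lines could look like this. "tank" is a placeholder pool name (swap in your own), and the file is written to /tmp here purely for illustration and syntax-checking.

```shell
# Example /etc/rc.local that force-mounts a ZFS pool at boot.
# "tank" is a placeholder pool name; written to /tmp for illustration.
cat > /tmp/rc.local.example <<'EOF'
#!/bin/sh -e
# Force-import the pool in case the service ordering raced at boot
zpool import -f tank 2>/dev/null || true
# Mount all ZFS filesystems
zfs mount -a
exit 0
EOF
chmod +x /tmp/rc.local.example
sh -n /tmp/rc.local.example && echo "rc.local example parses OK"
```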
Idea gotten from here
Also...further digging...proper location to do the -O (overlay mount) change would be in /etc/systemd/system/zfs.target.wants/zfs-mount.service which is actually a link to /lib/systemd/system/zfs-mount.service for systemd type OS's
No need for a deprecated /etc/rc.local file
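A related systemd-native sketch: rather than editing the unit file under /lib (which a package upgrade can overwrite), a standard drop-in can order docker after ZFS mounting. The drop-in mechanism and path convention are stock systemd; the file is written under /tmp here for illustration, while the real location would be /etc/systemd/system/docker.service.d/.

```shell
# Drop-in that makes docker wait for ZFS mounts, instead of editing the
# shipped unit files. Real path: /etc/systemd/system/docker.service.d/
mkdir -p /tmp/docker.service.d
cat > /tmp/docker.service.d/wait-for-zfs.conf <<'EOF'
[Unit]
Requires=zfs-mount.service
After=zfs-mount.service
EOF
cat /tmp/docker.service.d/wait-for-zfs.conf
```

After placing the real file, `systemctl daemon-reload` makes systemd pick it up.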
I think if you make docker start AFTER the unionfs starts and mounts..it will work
I wrote up a finding that delaying docker startup helps prevent these services from butting heads
I had the same problem, me and my buddy racked our brains for a few hours...started fiddling and think I might have found a solution... (albeit probably not the RIGHT way to fix it...but a working fix)
It seems that the docker was loading before (or during) the zfs or any such (btrfs) array was fully initialized and mounted by the zfs daemon, causing the bindings and the daemons to pitch a fit and stop working
I tried and tried and tried to figure out why and then started thinking that it might be a timing issue of the services starting, I even went so far as to stop the docker service, manually import the zpool, then restart the service....when that happened it worked fine
Then wrote a /etc/rc.local script to do it "automagically" which is a VERY nasty and brute force way of doing things
well I did a few things
first verified which runlevel I was in
then went into /etc/rc5.d and found S01docker. Coincidentally, ALL the links in there were marked S01 (starting more or less at the same time, I guess) and they pointed to the files in /etc/init.d
So in /etc/rc5.d I just renamed the S01docker link to a larger start number, which moved it behind all the other processes starting before it
Then I went into /etc/init.d and changed the docker script's ### BEGIN INIT INFO section
so that it needs all the other services loaded/started before it loads
Now on two reboots...all my zfs mounts are there and docker is running happily so it seems that it is persistent
I did remove the /etc/rc.local that I created to stop docker, mount the zfs, then restart docker too...the above fix seems to make docker start last AFTER everything else is done and all the zfs volumes have been mapped
I also put a symlink from /var/lib/docker to my /zfsmount/docker so the data is stored in my large storage array
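The two filesystem-level changes above can be demonstrated on a scratch directory. The live paths would be /etc/rc5.d and /var/lib/docker, and S99 is just an example of a larger start number; nothing here touches the real system.

```shell
# Scratch-directory demo of the rename and symlink described above
root=/tmp/initdemo
rm -rf "$root"
mkdir -p "$root/etc/rc5.d" "$root/zfsmount/docker" "$root/var/lib"
touch "$root/etc/rc5.d/S01docker"

# 1. Rename the start link so docker starts after everything else
mv "$root/etc/rc5.d/S01docker" "$root/etc/rc5.d/S99docker"

# 2. Point /var/lib/docker at the large ZFS-backed filesystem
ln -s "$root/zfsmount/docker" "$root/var/lib/docker"

ls "$root/etc/rc5.d"   # → S99docker
```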
I hope this helps...
Hi everyone - WarHawk8080, it looks like you are running it native as opposed to under Docker. I have an oDroid XU4 running OMV v4(.x), but I am having a whale of a time translating "raw" Docker config into OMV Docker. TechnoDadLife, I've watched all your tutorials on pretty much every OMV Docker thing you've done, but I can't get this one to stick. Any ideas, ladies and gents? Thanks so much
No need to translate. You can run the commands from the CLI, or use docker-compose. The container will be visible afterwards in the Docker-gui plugin.
Ah you are correct BTech...I am running it natively forgive me for not mentioning that it is not running in a docker
And thank you macom for translating it to run in a docker (I didn't run mine in a docker because I don't know docker all that well, but running it as the non-root "boinc" user seems ok)
I've updated it manually.
1. Have Plex plugin installed.
2. Go to https://www.plex.tv/media-serv…nloads/#plex-media-server and select a platform from the 'Choose distribution' dropdown. I picked 'Ubuntu (16.04+) / Debian (8+) - Intel/AMD 64-bit' for an Intel CPU and copied the link to the .deb file.
3. Connect to OMV box via ssh (Putty or similar from Windows)
4. Download the .deb file to a temporary folder on the NAS:
cd /temp; wget https://downloads.plex.tv/plex-media-server-new/18.104.22.1683-782228f99/debian/plexmediaserver_22.214.171.1243-782228f99_amd64.deb
5. Update the package:
sudo apt install ./plexmediaserver_126.96.36.1993-782228f99_amd64.deb
6. Remove the file:
rm ./plexmediaserver_*.deb
That is it.
Worked like a champ! (other than editing the version I downloaded) Thanks!
I installed the module, it compiled the ZFS modules, and I created a ZFS pool with no issues, been running solid as a rock!
Other than basic monitoring, seeing how the pool is doing, adding sub folders, and filesharing, I do everything else from the command line, which is VERY easy
More or less if it works on debian...it works just fine on OMV
plus there are tons of cheatsheet links out there for ZFS
The DOCS point to this repository that has more info...but by adding OMV-Extras then the ZFS module it does it all pretty much automatically
I would recommend the following Steps:
- enable SSH
- log into your OMV Box
- then simply follow these steps: http://msmhq.com/docs/installation.html
The Minecraft Server Manager is a simple but powerful tool for managing a Minecraft server.
I got it working on an Orange Pi PC with the above link
However I had to install java using this link
remember you have to modify your eula.txt and change the false to true for it to start
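That eula.txt flip is a one-liner with sed. Demonstrated on a scratch copy below (the server's file normally contains a line reading eula=false on first run); in practice you would run the sed against the eula.txt in your server directory.

```shell
# Minecraft refuses to start until eula.txt contains eula=true.
# Demonstrated on a scratch copy; edit the real server's eula.txt in practice.
printf 'eula=false\n' > /tmp/eula.txt
sed -i 's/eula=false/eula=true/' /tmp/eula.txt
cat /tmp/eula.txt   # → eula=true
```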
Here is how I got mine working
had to plug away to get it to work...but it is crunching WU
From this site:https://unix.stackexchange.com…ros-and-cons-of-ia32-libs
sudo dpkg --add-architecture i386   # add multiarch capability
sudo apt-get install libc6:i386     # the i386 libs needed for processing WU; without them, BOINC and Seti@home fail instantly
from this website: https://boinc.berkeley.edu/wiki/Installing_BOINC#Linux
sudo apt-get install libstdc++6 libstdc++5 freeglut3
it gives replacement packages, install them
install BOINC as the page above
sudo apt-get install boinc-client boinc-manager
sudo chown -R boinc:boinc /var/lib/boinc-client
sudo chown boinc:boinc /usr/bin/boinc
sudo chown boinc:boinc /usr/bin/boincmgr
sudo chown boinc:boinc /usr/bin/boinccmd
make sure you edit the remote hosts file
with the IP of the computer that will be remotely controlling the instance on the server
and edit /var/lib/boinc-client/gui_rpc_auth.cfg
with a password (doesn't have to be uber strong unless you want it to be, mine is 123)
To manually attach to a project
boinccmd --project_attach <url> <key of your account>
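For reference, the two remote-control files are just one value per line. The IP (192.168.1.50), the password, and the project URL below are all placeholders, and the files are written to /tmp for illustration only; on a Debian-based install they normally live under the boinc-client config/data directory, so check your own setup for the exact path.

```shell
# Example contents of the two remote-control files (placeholders, /tmp only)
printf '192.168.1.50\n' > /tmp/remote_hosts.cfg   # host allowed to connect remotely
printf '123\n' > /tmp/gui_rpc_auth.cfg            # RPC password

# Attaching to a project from the CLI (URL and key are placeholders):
# boinccmd --project_attach http://project.example.org/ <your_account_key>
cat /tmp/remote_hosts.cfg
```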
I found this:
Replacing Disks in a ZFS Root Pool
You might need to replace a disk in the root pool for the following reasons:
- The root pool is too small and you want to replace it with a larger disk
- The root pool disk is failing. In a non-redundant pool, if the disk is failing and the system no longer boots, boot from another source such as a CD or the network. Then, replace the root pool disk.
You can replace disks by using one of two methods:
- Using the zpool replace command. This method involves scrubbing and clearing the root pool of dirty time logs (DTLs), then replacing the disk. After the new disk is installed, you apply the boot blocks manually.
- Using the zpool detach|attach commands. This method involves attaching the new disk and verifying that it is working properly, then detaching the faulty disk.
If you are replacing root pool disks that have the SMI (VTOC) label, ensure that you fulfill the following requirements:
- Physically connect the replacement disk.
- Attach the new disk to the root pool:
# zpool attach root-pool current-disk new-disk
Where current-disk becomes old-disk, to be detached at the end of this procedure. The correct disk labeling and the boot blocks are applied automatically. Note - If the disks have SMI (VTOC) labels, make sure that you include the slice when specifying the disk, such as c2t0d0s0.
- View the root pool status to confirm that resilvering is complete. If resilvering has been completed, the output includes a message similar to the following: scan: resilvered 11.6G in 0h5m with 0 errors on Fri Jul 20 13:57:25 2014
- Verify that you can boot successfully from the new disk.
- After a successful boot, detach the old disk:
# zpool detach root-pool old-disk
Where old-disk is the current-disk of Step 2. Note - If the disks have SMI (VTOC) labels, make sure that you include the slice when specifying the disk, such as c2t0d0s0.
- If the attached disk is larger than the existing disk, enable the ZFS autoexpand property:
# zpool set autoexpand=on root-pool
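Condensed, the attach/detach method boils down to four commands. The pool and disk names below are examples from the quoted docs (add the sXX slice suffix for SMI/VTOC-labeled disks); the sequence is written to a scratch script and syntax-checked rather than run against a real pool.

```shell
# Condensed attach/resilver/detach method; names are examples only.
cat > /tmp/zpool-swap.sh <<'EOF'
#!/bin/sh -e
zpool attach rpool c2t0d0s0 c2t1d0s0   # mirror the new disk alongside the old one
zpool status rpool                     # wait for "resilvered ... with 0 errors"
# ...test-boot from the new disk before continuing...
zpool detach rpool c2t0d0s0            # then drop the old disk
zpool set autoexpand=on rpool          # grow if the new disk is larger
EOF
sh -n /tmp/zpool-swap.sh && echo "ok"
```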
Got it working...crunching WU as we speak...seems I had to install a few i386 libraries to get it working...
I have a writeup on the howto...will post it up when I get home
Quite useful documentation! The positive thing with ZFS is that you can do almost everything from the command line.
I don't want to give up ZFS anymore.
Oh yeah...it's amazing...it's like RAID but "automatic", with snapshots and automatic protections built in...automatic error correction, the ability to check for and repair bit rot, and it more or less maintains itself...
I went with RAIDZ (the equivalent of RAID-5, with only "one" parity drive), but they go up to RAIDZ3, which on a HUGE array can provide MORE protection for data than RAID-5 or RAID-6 (think BACKBLAZE)
I'm just so surprised that ZFS isn't a standard option already built in...it seems there is already a package for it (and in debian as well)...the plugin works fine...but the command line really is where the power of the system lies
Found some more documentation
When my SD cards would get corrupted...I would pull them and put them in a laptop running linux...then use gparted to scan the drive for errors
I wonder if you could use a livecd linux and boot that to see if you can access the data on the drives?
Oh man...ZFS is such a GREAT plugin...it runs solid as a rock on my build on OMV 4.0
I did stumble across this
That is OMV's boot partition, not your raid. Your two raid drives are /dev/sda and /dev/sdb; they are clearly shown under Storage -> Disks. To confirm they are your raid drives, run blkid. Then you'll need to create and then assemble to get the raid up.
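A sketch of the assemble side of that, using the device names from above; verify the members with blkid and --examine first, since --create with the wrong parameters can destroy data. The commands are written to a scratch script and syntax-checked rather than run live.

```shell
# Hand-assembling the raid1; /dev/sda and /dev/sdb are from the post above,
# confirm them on your own system before running anything for real.
cat > /tmp/raid-recover.sh <<'EOF'
#!/bin/sh -e
blkid                                   # identify the member partitions
mdadm --examine /dev/sda /dev/sdb       # check the raid superblocks
mdadm --assemble --scan                 # try auto-assembly first
cat /proc/mdstat                        # confirm the array is running
EOF
sh -n /tmp/raid-recover.sh && echo "ok"
```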
Another way to do this is to attach one drive to a Linux machine via USB and get the data off that way; as it's a raid 1, the drives are identical. Another option, if you only have a Windows machine, is to use Ext2Fsd. Does this work? I have used it once to recover data from what appeared to be a failed drive on an Ubuntu install; the drive was fine, but for whatever reason Ubuntu wouldn't mount it.
It shows it does in this webpage
I saw somewhere in this forum it said to
A. Add new drive to pool (expand)
B. Wait till it resilvers the pool
C. Remove the bad drive from pool
Haven't come across that issue as yet...but it seemed to make sense