Adoby: I appreciate your help with this. Unfortunately, your recommendations are quite a bit beyond my skill level at this time. Your backup system, especially the versioning aspect, is very appealing to me. I won't let it go until I have figured it out. Right now, though, I need something set up to protect data I am about to commit to an HC1.
@flmaxey: I'm back up and running, but my OS backup wasn't as far along as I had hoped. Still, it wasn't a total rebuild. The OS is backing up as I write. Once it's backed up, I plan to try the rsync schedule again as laid out in the User Guide. I gather that you have a hand in the creation/editing of the User Guide. In future editions, would it be possible to throw a bone now and then to users laboring on Mac machines? When I back up my SD cards I use Terminal. It took me some digging to figure it out: sudo dd if=/dev/disk# of=~/backupSD/hc1omvyyyymmdd.dmg, where disk# is discovered using diskutil list.
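For any other Mac users following along, the imaging step above can be sketched like this. The device number (/dev/disk4) and output folder are assumptions from my setup; double-check yours with diskutil list first, because dd will happily overwrite the wrong disk:

```shell
# macOS sketch: image an SD card to a .dmg file.
# Find the card's device node first (disk4 here is an assumption):
diskutil list

# Unmount (not eject) the card so dd gets exclusive access:
diskutil unmountDisk /dev/disk4

# Image the raw device; /dev/rdisk4 plus bs=1m makes the copy much faster
# than the buffered /dev/disk4 node with the default block size:
sudo dd if=/dev/rdisk4 of=~/backupSD/hc1omv$(date +%Y%m%d).dmg bs=1m
```

The $(date +%Y%m%d) substitution just fills in the yyyymmdd part of the filename automatically.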
Still, it is a marvelous guide for beginners like me. I'm looking forward to the addition of backing up to a remote machine. If I understand the recovery part of rsync, wouldn't this be an acceptable way to move to a larger drive: rsync to a larger drive over USB and redirect the shares to it, then unmount the drives, shut down, switch the backup drive to SATA, and reboot?
I'm sorry. I should have said that I made a disk image using "dd" about a week ago. It just bothers me to pull the plug. I know an SD card degrades that way. I had hoped that it would respond at some point, but I figure I have waited long enough. Thanks.
At this point I am still not able to do anything with the HC1. SSH returns "port 22: Operation timed out", and the web GUI hasn't been there for Plex or OMV for two days now. Right now I just want to shut it down gracefully and start over with a clean install, but I don't know how. I guess I'll just pull the plug unless someone has a magic wand.
@Adoby thank you for the info. I will look at your backup scheme as soon as I am back up and running. You will have to explain the scripting and the filesystem organization a bit further; I'm sorry, but I am not familiar with them.
I had my rsync job set to back up my Odroid-HC1 in the wee hours of Monday. Today was its first go. My Cron Daemon notifications woke me early this morning with all kinds of bad news. One message started like this:
mesg: ttyname failed: Inappropriate ioctl for device
rsync: readlink_stat("/srv/dev-disk-by-label-disk2/AppData/Plex/Library/Application Support/Plex Media Server/Media/localhost/0/cdf00387e8d9c0d835bbaf701618a8d2c6004f7.bundle/Contents/Thumbnails/thumb1.jpg") failed: Bad message (74)
rsync: recv_generator: failed to stat "/srv/dev-disk-by-label-disk2/AppData/Plex/Library/Application Support/Plex Media Server/Media/localhost/0/cdf00387e8d9c0d835bbaf701618a8d2c6004f7.bundle/Contents/Thumbnails/thumb1.jpg": Bad message (74) ...
plus much more. A hundred or so of these, interlaced with notifications that start like this:
sending incremental file list
AppData/Plex/Library/Application Support/Plex Media Server/Cache/
AppData/Plex/Library/Application Support/Plex Media Server/Cache/CloudAccess.dat
AppData/Plex/Library/Application Support/Plex Media Server/Logs/
AppData/Plex/Library/Application Support/Plex Media Server/Logs/Plex Media Server.log
IO error encountered -- skipping file deletion
AppData/Plex/Library/Application Support/Plex Media Server/Media/localhost/3/0149f228a82afa4b6fd8f23457630b578a39cf7.bundle/Contents/...
And on it goes. There were a number of these mixed in too:
Execution failed Service nginx
Date: Mon, 19 Nov 2018 06:03:44
Description: failed to stop (exit status -1) -- Program '/bin/systemctl stop nginx' timed out after 30 s
And then several like this:
Resource limit matched Service omv1
Date: Mon, 19 Nov 2018 05:51:48
Description: cpu system usage of 97.4% matches resource limit [cpu system usage>95.0%]
There were about 300 emails in all. I am unable to load the web GUI of either OMV or Plex. Trying to SSH in times out. Do I just unplug it, throw away the microSD card, and start from scratch?
I'll try it without them then and see how it goes. I am at the early stages of this project and haven't really committed any vital data to my shares yet.
Should I also stick with Samba alone and not worry about setting up Apple Filing too? My Macs seem to be working fine with the Samba shares. I decided to run an Odroid-XU4 with Ubuntu MATE, just for fun, to see what this Linux thing is all about. Haven't got it up to sharing yet, but I haven't tried very hard. A bit off topic. Sorry.
I set it up with the /usr/bin/rsync path and it seems to be working fine. Give me a clue: how do I tell if I'm getting a good copy on the backup drive? Total novice on this end; probably should take a class in Linux command line.
I am pretty sure the unmounting is related to customizing spindown and such. This time around I just left the options at disable when setting up the drives. That is what I read in a post (I linked to it above).
I don't intend to share with this server. Just me. All the machines on the LAN are mine. But I'm open to suggestions. Never know what may change in the future.
Thanks for the info.
Try to stay away from ACLs.
Do you mean shared folder ACLs? Can you please explain why? I have found that in life the devil is in the details, and little things like ACL settings can mean the difference between success or failure in setting up a home NAS.
I guess I spoke too soon. My disk refused to stay mounted/referenced. I stumbled across this thread, which explains that there is something amiss with the Odroid-HC2 SATA firmware that is incompatible with enabling anything under physical disk properties. I haven't had time to test this out, but will soon.
/usr/bin/rsync -av --delete /srv/dev-disk-by-label-disk1/ /srv/dev-disk-by-label-disk2
I was just following the example given on page 62 of the OMV User Guide. As I said above after reformatting my data drive & re-setting up my shared folders, it worked fine. But in the future I will use the full path. Thanks for the tip.
I'm not sure what was going on, but I deleted all of my shared folders and such so that I could reformat the hard drive, then recreated the users, folders, and scheduled jobs — then it all went right. Obviously, I haven't committed any real data to the beast yet so deleting and recreating wasn't a big deal. Still curious what caused it to act up like that.
I am trying to set up a scheduled rsync job using the OMV Users Guide (page 61 and following). Here is the job I created:
Screen Shot 2018-11-12 at 8.58.47 PM.png
When I run the job I get the following error:
Screen Shot 2018-11-12 at 8.51.11 PM.png
Going back to my file systems, I discovered that both of my disks show as not referenced:
Screen Shot 2018-11-12 at 8.13.08 PM.png
How can that be when I have five shared folders attached to disk1?
Screen Shot 2018-11-12 at 8.12.39 PM.png
What am I doing wrong?
Thanks for the terminal info. I am very weak on command line. I have been saving every little snippet I can find and building a cheat sheet.
I gave up on my Airport Extreme and bought a Netgear R7000 router. Being a die-hard Mac guy I never thought I would but I did. There was not enough control over the fine details with the AE.
Also, sadly, I gave up on trying to get Nextcloud to work on OMV. I found NextcloudPi setup on an Odroid-hc2 to be so much easier, still using DuckDNS and Letsencrypt. I have decided to use my OMV for just a LAN server. Trying to figure out rsync right now.
Under Advanced settings I had "Use internal DNS" checked and "Use received DNS with user-entered DNS" unchecked. So I changed it to the below:
Screen Shot 2018-11-03 at 7.49.14 AM.png
Under Basic Settings/Network I had Use DHCP checked. I now have it as below, unchecked.
The error was in my router settings.
@Nefertiti, so you rewrote your config.php file to read "nextcloud.your_domain.duckdns.org". Did you write it that way in all three locations?
1 => 'nextcloud.your_domain.duckdns.org',
'overwrite.cli.url' => 'https://nextcloud.your_domain.duckdns.org',
'overwritehost' => 'nextcloud.your_domain.duckdns.org',
And in your browser, does it show up as https://nextcloud.your_domain.duckdns.org? And does it still work?
I also noted that the nextcloud.subdomain.conf file shows that same way of inserting those lines, but it also adds the line 'trusted_proxies' => ['letsencrypt'],. Did you add that as well?
I have finally gotten past the letsencrypt container with success, but cannot get nextcloud to connect, and when I try to connect locally I am redirected to the duckdns.org address with no access there either.
@Nefertiti the step you describe above comes after the step involving the command "docker logs -f letsencrypt" which basically validates the setup of the letsencrypt container. I have never made it to the section you are describing now.
One of the problems I have is when I delete the container to start over, the folders/files remain behind on the server, and I am unable to remove them. If I start a new container will those folders/files be written over? To be sure, I flash a new image on a SD card and start from scratch.
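For what it's worth, this is how I understand the clean-restart sequence. The container name and host path below are assumptions from my setup; the point is that a new container reuses any leftover bind-mounted config rather than overwriting it, so the folder has to be removed as root first:

```shell
# Sketch: fully remove a container plus its leftover bind-mounted config.
# Container name and host path are assumptions -- adjust to your setup.
docker stop letsencrypt
docker rm letsencrypt

# The bind-mounted folders survive container deletion by design; a fresh
# container will pick them up as-is, so clear them for a true restart:
sudo rm -rf /sharedfolders/AppData/letsencrypt
```

That should make re-flashing the whole SD card unnecessary just to retry one container.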
I still believe there is something set wrong in my router. Figuring my Airport Extreme Base Station offered very little user control over its settings, and despite @TecnoDadLife's advice against it, I purchased a Netgear R7000 router and installed AdvancedTomato firmware on it. I love the thing, and all of my computers do too. No problem connecting. The blasted thing has soooo many settings that I fear I have missed some little check box or some such. I have Googled "Tomato firmware manual" to try to figure out if I have overlooked something, but I haven't found anything.
After setting up a static IP in network settings, forwarding that static IP in port forwarding, and reducing 2048 to 1024, I still get the following error on a brand-new clean install on which Nextcloud was working perfectly well. I just rebooted OMV and Nextcloud still works:
- The following errors were reported by the server:
Timeout during connect (likely firewall problem)
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address. Additionally, please check that
your computer has a publicly routable IP address and that no
firewalls are preventing the server from communicating with the
client. If you're using the webroot plugin, you should also verify
that you are serving files from the webroot path you provided.
- Your account credentials have been saved in your Certbot
configuration directory at /etc/letsencrypt. You should make a
secure backup of this folder now. This configuration directory will
also contain certificates and private keys obtained by Certbot so
making regular backups of this folder is ideal.
ERROR: Cert does not exist! Please see the validation error above. The issue may be due to incorrect dns or port forwarding settings. Please fix
your settings and recreate the container
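Given the "Timeout during connect (likely firewall problem)" line, the one thing I keep meaning to verify is whether port 80 is actually reachable from outside, since Let's Encrypt's HTTP challenge comes in on it. A quick check, run from a machine outside the LAN (a phone hotspot works; the hostname below is a placeholder for your real DuckDNS name):

```shell
# Sketch: check that the HTTP challenge port answers from the internet.
# Prints the HTTP status code, or 000 if the connection never completes
# (which would point at port forwarding or the firewall, not Nextcloud).
curl -sS -o /dev/null -w '%{http_code}\n' --max-time 10 http://your_domain.duckdns.org/
```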