You can change the port that OMV runs on.
if you had a reasonably recent backup... the most this is really costing you is inconvenience and time... and it will be a lesson you won't soon forget
Two points that really hit home:
Having a solid backup, and a lesson learned: not only for the person it happened to, but also for us not-so-knowledgeable "newbies" who sometimes don't take precautions, either because we don't know better or were never advised to.
This makes you think that it's not just a matter of having a nice server/NAS/home cloud that can be reached from anywhere in the world, but also of keeping things solid enough to prevent hiccups.
Thank you for all the input on this post so far.
So do you have a link for 64-bit Raspbian? I can only find a beta version.
I thought I mentioned it in my comment, sorry.
The 64-bit version is still considered BETA (I don't really understand why, because it's really stable), but it runs perfectly with no issues.
You can download it and then flash your SDcard from here:
Raspbian is a 32-bit system. Could it be that only up to 2 TB is recognized, so two partitions should be created?
As far as I know, as long as the disk uses GPT, ext4 doesn't have the 2 TB partition limit. Quote from the source:
Ext3 filesystems used 32-bit addressing, limiting them to 2 TiB files and 16 TiB filesystems (assuming a 4 KiB blocksize; some ext3 filesystems use smaller blocksizes and are thus limited even further).
Ext4 uses 48-bit internal addressing, making it theoretically possible to allocate files up to 16 TiB on filesystems up to 1,000,000 TiB (1 EiB). Early implementations of ext4 were still limited to 16 TiB filesystems by some userland utilities, but as of 2011, e2fsprogs has directly supported the creation of >16TiB ext4 filesystems. As one example, Red Hat Enterprise Linux contractually supports ext4 filesystems only up to 50 TiB and recommends ext4 volumes no larger than 100 TiB.
But if you're having issues doing it in the OMV GUI, try doing it via the CLI with fdisk.
sudo fdisk /dev/sdX (where X is the drive letter).
After you create the partition, format it with sudo mkfs.ext4 /dev/sdX1 (where X is your drive letter)
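For reference, a minimal sketch of the whole CLI sequence in one place, using parted non-interactively instead of fdisk's prompts. `sdX` is a placeholder for your real disk; the script only prints the commands (nothing destructive runs), so you can review the plan and then run each line as root once the device name is correct:

```shell
#!/bin/sh
# Placeholder device name -- replace sdX with your actual disk first.
DEV="/dev/sdX"

# Build the command plan; nothing destructive is executed here.
PLAN="parted -s $DEV mklabel gpt
parted -s $DEV mkpart primary ext4 0% 100%
mkfs.ext4 ${DEV}1"

# Review the plan, then run each line as root against the real disk.
echo "$PLAN"
```

The `mklabel gpt` step is what avoids the 2 TB ceiling; an MBR (`msdos`) label would reintroduce it.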
OR, if you're just starting to install OMV (or the system), simply redo the installation, but instead of the 32-bit version use the 64-bit Raspbian Lite.
The install instructions are the same, except you use the 64-bit img.
Your OMV/RPi will run really nicely (from personal experience).
Is your drive initialized as MBR or as GPT?
To check it run sudo fdisk -l /dev/sda
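The line to look for in that output is "Disklabel type": `dos` means MBR, `gpt` means GPT. A small sketch of filtering it out, shown here against sample text (since I can't run fdisk on your box); on a real system, pipe the actual `sudo fdisk -l /dev/sda` output into the same awk filter:

```shell
#!/bin/sh
# Sample of what `sudo fdisk -l /dev/sda` prints (abridged, made-up values).
SAMPLE='Disk /dev/sda: 3.64 TiB, 4000787030016 bytes
Disklabel type: gpt
Disk identifier: 0A1B2C3D-1111-2222-3333-444455556666'

# Pull out just the partition-table type.
LABEL=$(printf '%s\n' "$SAMPLE" | awk -F': ' '/Disklabel type/ {print $2}')
echo "$LABEL"   # prints: gpt  (an MBR disk would print: dos)
```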
Been following this topic to be more mindful about security (although I try to block everything I can think of).
massenzio I'm thinking that you don't have the "fail2ban" service active, correct?
ryecoaaron Would the "fail2ban" service be enough to prevent the OP situation? I'm almost certain that my port 22 is blocked on my router but am trying to cover all angles I can think of.
Also, the first thing I do, when configuring OMV ssh access is to disallow "root" access.
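For anyone wanting to do the same, a sketch of that edit. It runs here against a throwaway sample file so it can be tried safely; on the real system, point it at /etc/ssh/sshd_config (as root) and restart the ssh service afterwards. The sed pattern assumes GNU sed, which is what Raspbian/Debian ship:

```shell
#!/bin/sh
# Throwaway config file for illustration; use /etc/ssh/sshd_config for real.
CONF="sshd_config.sample"
printf 'Port 22\n#PermitRootLogin prohibit-password\n' > "$CONF"

# Force the setting, whether the existing line was commented out or not.
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' "$CONF"

RESULT=$(grep '^PermitRootLogin' "$CONF")
echo "$RESULT"   # prints: PermitRootLogin no
rm -f "$CONF"
```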
Maybe crashtest can do a review of the install guide and focus some topics on this issue: a "checklist" of the most common "failures" that newbies or not-so-knowledgeable users might make, which would lead to something like what happened to the OP.
I have the USB-C version of that box and it works correctly with the power management from OMV.
Never used/needed hd-idle.
The way it's configured in OMV is:
The 2 disks run in Single Mode with a BTRFS RAID1 filesystem, which makes both spin up (with a slight delay) when data is written to them.
But after a while, they spin down normally and stay that way until there's some activity: either access to the data or the cronjobs for cleanup or maintenance.
The thing I had to make sure of was that nothing was constantly waking up the disks, so I only have an SMB share (that I don't use much) and the Nextcloud DATA.
When at home, my wife and I have our phones in sync with NC, which makes the disks spin up and down quite often (pictures taken on the phone are uploaded to NC almost instantly), but during the "dead" hours it is pretty much silent.
For starters, just remove all access to the disks and check whether anything is still calling them.
The disks should spin down normally.
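To verify whether a disk actually spun down, `sudo hdparm -C /dev/sda` reports the drive state. A sketch of pulling the state out of that output, using sample text here since there's no real drive to query:

```shell
#!/bin/sh
# Sample `hdparm -C /dev/sda` output: "standby" means the disk is spun down,
# "active/idle" means it is still spinning.
SAMPLE='/dev/sda:
 drive state is:  standby'

# Extract just the state word; on a live system, pipe the real command in.
STATE=$(printf '%s\n' "$SAMPLE" | awk -F':[ ]*' '/drive state/ {print $2}')
echo "$STATE"   # prints: standby
```

Checking this before and after the expected spin-down window makes it easy to tell whether some service is keeping the disks awake.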
Maybe the IP changed?!?
Or the port reset?
If you already have backups, and redeployed the stack updating the images (both NC and DB), then they are already on the latest versions.
You can run the WebUpdater in the NC web GUI to update to the latest version.
But to be sure of this, just check in Portainer what versions of the images you have:
In Portainer, go to "Containers" and click on the DB (in my case it's mariadb):
Then scroll down till you see its version (yours might show a different one, since I'm using the "alpine" tag so it won't clash with armhf on RPis).
As you said you're on a PC, so I'm assuming the x86_64 version:
Post here what version you are on.
If it's already on the latest version, then you only need to update NC.
Aren't they the same: a web front-end to Docker?!?
I always assumed you use either Portainer or Yacht, NOT both, but I may be way off track.
Also, since I don't use either, I'm not the best person to speak...
Isn't it possible to set the binds rw?!?
I'm on phone so, not easy to confirm.
I have been following this thread for about a month in an attempt to get nextcloud up and running using duckdns.
At this point I can get to the Nextcloud screen only if I enter: https://myomvip:443/nextcloud.
However, I receive a forbidden message blocking me from accessing nextcloud when I enter: https://mydomain.duckdns.org/nextcloud.
I suspect there is something in my php file that needs to be changed.
Any suggestions? Thanks.
Being blocked on external access means your SWAG/duckDNS config is not OK.
Post your YML here, hiding sensitive data, to get a better idea of where it's failing.
Also the output of the logs can help:
docker logs -f swag
docker logs -f nextcloud
Hello everyone!
I have been using Nextcloud for a couple of years with no problems, until I updated not knowing that you can't jump versions...
So I had to reinstall everything; unfortunately my previous method, explained by DBtech, is no longer working, so I have been following this guide.
The only problem is that I would like to access my Nextcloud directly, without a subdomain or a subfolder: go directly to https://example.duckdns.org instead of something like https://nextcloud.example.duckdns.org.
That's how it was working before, but it was using letsencrypt instead of SWAG; I don't know if that makes a difference.
This is what I tried, but unfortunately I still get the "Welcome to your SWAG site" page instead of Nextcloud.
The same as above, your YML will help.
But it's only half a solution because it only works for the PC where the hosts file is located.
Yes, it's a pain that you need to do it on all machines, but that's the only way I found to do it.
Maybe someone else will give other possibilities.
Or maybe you can try AdGuard via Docker as a DNS server and point the router's DNS setting at it.
Honestly, I don't know how to do that. I only use AdGuard to block ads for specific machines, NOT as a DNS server for the whole network.
Check the adguardhome GitHub to see if it gives you any solutions/ideas.
Small NOTICE for those running Nextcloud with "subfolder" (NOT subdomain) access:
If no edits were done to the original, then to update the file you'll need to delete your conf, restart SWAG, and then rename the new sample:
rm -f ...swag/config/nginx/proxy-confs/nextcloud.subfolder.conf
docker restart swag (you'll lose access to NC but SWAG will download the new sample)
cp ...swag/config/nginx/proxy-confs/nextcloud.subfolder.conf.sample ...swag/config/nginx/proxy-confs/nextcloud.subfolder.conf
docker restart swag
docker logs -f swag (check that no errors occur and see if you regain access to NC)
If you previously edited the file, you'll need to redo those edits (take notes of them before deleting the file).
Apply those edits to the new file after doing the above update.
What version shows on the Nextcloud page "settings/admin/overview"?
The latest image runs v21.0.3 (and will soon become v22):
The way to update Nextcloud (on Docker) is to first redeploy the stack (updating the image) and then update Nextcloud via the webpage.
Your mariaDB also needs to be updated.
But first make sure you have a working backup of everything, in case it goes sideways.
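Sketching those two update steps, assuming a compose-managed stack with services named nextcloud and mariadb, and the official Nextcloud image's occ invocation (the linuxserver image differs); the script only prints the plan so you can adapt the names before running anything:

```shell
#!/bin/sh
# Assumed service/container names -- adjust to match your compose file.
PLAN="docker-compose pull nextcloud mariadb
docker-compose up -d
docker exec -u www-data nextcloud php occ upgrade"

# Review, then run each line yourself (occ upgrade replaces the web updater).
echo "$PLAN"
```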
I may have to make a new new Nextcloud container. I assume the database and external data files would all still be compatible with the new Nextcloud?
As long as you point your folders to the same place, yes.
But it's not only a matter of updating the image; you also need to update Nextcloud with the web updater (or with the "occ" command).
Only issue you might have is if you're running mariaDB on armhf.
It will mess up your DB, since NC v21 doesn't work with mariaBionic (which is (still) used in the armhf version).
Before anything, make BACKUPS of the DB and the containers' configs.
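As a sketch of what "backups of the DB" can look like with Docker, assuming a container named mariadb with the usual MYSQL_ROOT_PASSWORD env var set, and a made-up config path (adjust both to your stack); the script only prints the commands so nothing runs by accident:

```shell
#!/bin/sh
# Assumed container name; /path/to/nextcloud/config is a placeholder.
DB_CONTAINER="mariadb"
STAMP=$(date +%F)

# Command plan only; run each line yourself once the names match your setup.
PLAN="docker exec $DB_CONTAINER sh -c 'mysqldump -u root -p\"\$MYSQL_ROOT_PASSWORD\" --all-databases' > nextcloud-db-$STAMP.sql
tar czf nextcloud-config-$STAMP.tar.gz /path/to/nextcloud/config"
echo "$PLAN"
```

Dumping from inside the container avoids needing a mysql client on the host, and the date-stamped filenames keep successive backups from overwriting each other.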
To keep it simple:
What hardware are you using?
How do you launch your stack? Portainer? docker-compose CLI? (Post your YML with sensitive info masked: PW, URLs, etc.)
What versions of NC, mariaDB, and swag/letsencrypt are you running?