please also consider ZFS and ZFS send as alternatives
If RAID and ZFS are not backup, then why do we use a NAS?
-
please also consider ZFS and ZFS send as alternatives
Raul, I don't have enough RAM and this system doesn't support ECC RAM. So I'm not going to consider ZFS just because of that.
This is a refurbished computer that I assembled from "old" parts. Not too old but the Mobo clearly states that it doesn't support ECC RAM.
Without ECC RAM, using ZFS is basically playing Russian roulette with your data.
Otherwise I would gladly give ZFS a try.
In this particular scenario, SnapRAID+MergerFS is a much better option. One day, when I move to more up-to-date hardware, I will give ZFS a try.
Now, if only I knew how to make Tar backups, encrypt them with GnuPG, send them to BB2 and send a status report by email or something similar...
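Roughly, I imagine the pipeline would look something like this (a rough sketch, untested; the rclone remote name, bucket, paths and passphrase file are all hypothetical, and on GnuPG 2.1+ you may also need --pinentry-mode loopback):

# Create a compressed tar archive of the share:
tar -czf /tmp/data-$(date +%F).tar.gz /srv/data

# Encrypt it symmetrically with GnuPG, reading the passphrase from a root-only file:
gpg --batch --symmetric --passphrase-file /root/.backup-pass \
    --output /tmp/data-$(date +%F).tar.gz.gpg /tmp/data-$(date +%F).tar.gz

# Upload the encrypted archive with rclone (a remote named "b2" must be configured first):
rclone copy /tmp/data-$(date +%F).tar.gz.gpg b2:my-bucket/backups/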
Cheers -
Without ECC RAM, using ZFS is basically playing Russian roulette with your data
100% wrong. Especially when you don't want to use ECC RAM, or can't afford it, you should of course use a checksummed filesystem like ZFS.
http://jrs-s.net/2015/02/03/wi…n-ecc-ram-kill-your-data/
The same misunderstanding exists about DRAM and ZFS: that ZFS needs huge amounts of DRAM. That was true back in 2009 when we deployed it on Solaris machines, but things have changed. Today you only need huge amounts of RAM if you want to use deduplication, since the DDT (deduplication table) needs to fit into memory, otherwise things slow down. If you don't do dedup, everything is fine; simply adjust the amount of memory for the ARC cache (in home environments, using almost all RAM for the ARC is useless anyway, since the access patterns are totally different from enterprise environments).
-
(in home environments, using almost all RAM for the ARC is useless
Could you give a general recommendation on how much RAM should be dedicated to the ARC in home environments?
-
100% wrong. Especially when you don't want to use ECC RAM, or can't afford it, you should of course use a checksummed filesystem like ZFS.
http://jrs-s.net/2015/02/03/wi…n-ecc-ram-kill-your-data/
The same misunderstanding exists about DRAM and ZFS: that ZFS needs huge amounts of DRAM. That was true back in 2009 when we deployed it on Solaris machines, but things have changed. Today you only need huge amounts of RAM if you want to use deduplication, since the DDT (deduplication table) needs to fit into memory, otherwise things slow down. If you don't do dedup, everything is fine; simply adjust the amount of memory for the ARC cache (in home environments, using almost all RAM for the ARC is useless anyway, since the access patterns are totally different from enterprise environments).
Wait a minute. I know people who have lost all their ZFS pool data because they had only 4GB of RAM (which is what I have right now), and others because they had bad RAM modules: data kept corrupting until it got to the point that they lost it.
Anyway, I only have 3x 1TB HDDs on an outdated Intel CPU with 4GB of RAM. Somehow ZFS doesn't seem like the best approach in my use case, so I'm betting on SnapRAID+MergerFS. But what about the cloud backups? How do I solve the cloud backups without having to fiddle with a .NET application...?
Cheers -
Could you give a general recommendation on how much RAM should be dedicated to the ARC in home environments?
As low as you want, of course. What is the ARC? https://www.zfsbuild.com/2010/…anation-of-arc-and-l2arc/
This is for servers and 'hot data', and is meant to ensure the most-used data remains in fast RAM. Home environments that do not also run virtual machines off their ZFS pools are best described as 'cold data': something is written to the NAS, then nothing happens for days, then something is read back. An ARC is of almost no use there.
The ZFS defaults use all available RAM for the ARC, leaving the OS and every other task only 1 GB. So you waste 3 GB for nothing. Reduce this setting to a reasonable minimum (1 GB or 512 MB; you won't make use of the ARC anyway) and you're fine. It's a simple tunable, and after the necessary adjustment large ZFS pools run fine even on SBCs with just 1 GB of RAM.
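On ZFS on Linux the tunable in question is zfs_arc_max (a value in bytes); a minimal sketch for capping the ARC at 1 GB:

# Cap the ARC at 1 GiB on the running system (value is in bytes):
echo 1073741824 > /sys/module/zfs/parameters/zfs_arc_max

# Make the cap persistent across reboots:
echo "options zfs zfs_arc_max=1073741824" > /etc/modprobe.d/zfs.conf
update-initramfs -u    # Debian/Ubuntu: rebuild the initramfs so the option applies at boot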
-
I know people who have lost all their ZFS pool data because they had only 4GB of RAM
References please. I only know the usual FUD spread by the same FreeNAS forum moderator who is also responsible for spreading the 'ZFS without ECC will destroy not only your data but the whole world' FUD.
If you have just 4 GB of DRAM and you run in a home environment, there's no reason to waste 3 GB of it on a type of cache you can't make use of. Adjust this and you're fine. One problem is that FreeNAS doesn't care about this (or about tunables in general), and the FreeNAS community (or its forum moderators) love to blame users for being reluctant to follow their recommendations.
With ZoL (ZFS on Linux), starting from version 0.7.0, memory utilization is even better, since ZoL now compresses data in memory (compressed ARC).
I know ... others because they had bad RAM modules: data kept corrupting until it got to the point that they lost it
So what? If you have bad RAM you get data corruption, on any OS with any FS. There's no magic involved. Use crappy RAM and you run into problems. But with a checksummed filesystem you will at least be aware of it if you scrub regularly.
But here's the problem: one specific FreeNAS forum moderator developed a theory that a ZFS scrub without ECC memory would cause more harm than good, and that a single 'bad memory cell' will happily destroy your whole pool, since 'bad memory' will corrupt intact data on disk as part of the scrub. I too believed in this BS for quite some time. But it's just that: BS.
So yes, using defective memory will end up with you whining about corrupted data for sure. It's worse with filesystems that have no checksum/hash features, since you'll always realize way too late that your memory is corrupting data. With ZFS, btrfs or ReFS, their data integrity features and regular scrubs, you'll notice the problem earlier and can actively try to solve/mitigate it before it's too late (i.e. before corrupted data has spread into all available backup generations).
Again: jrs-s.net/2015/02/03/will-zfs-…n-ecc-ram-kill-your-data/
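That early warning looks like this in practice (assuming a pool named tank):

zpool scrub tank        # read every block and verify it against its checksum
zpool status -v tank    # non-zero CKSUM counters or listed files reveal corruption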
-
I understand your objections to Mono. But I don't have a problem with running applications that require Mono by running them in Docker containers. Have you considered running Duplicati in a Docker container?
-
The ZFS defaults use all available RAM for the ARC, leaving the OS and every other task only 1 GB. So you waste 3 GB for nothing. Reduce this setting to a reasonable minimum (1 GB or 512 MB; you won't make use of the ARC anyway) and you're fine. It's a simple tunable, and after the necessary adjustment large ZFS pools run fine even on SBCs with just 1 GB of RAM.
Interesting statement.
So, by lowering the ARC's RAM usage I could run a ZFS pool on my old rig...
I will definitely keep that in mind.
I understand your objections to Mono. But I don't have a problem with running applications that require Mono by running them in Docker containers. Have you considered running Duplicati in a Docker container?
In all honesty, it's not like .NET/Mono is a disease or something, but I would definitely like to stay as far away from it as I possibly can.
Having said that, if there's really no other possible way around this issue, then I would try using Duplicati from a container. Only as a very last resort.
Is it documented?
Come to think of it, can I run file searches on my Duplicati backups? Let's say I've accidentally erased 10 pictures. Does it have a database of the files that are stored in the backup files (which I would assume use some sort of compression and encryption)?
All this because I could use whatever file storage service, install Rclone, and run some upload script. I could even create a script for the backup/encryption part of the equation. I just wouldn't know how to deal with the logging/alerting part of the equation...
I'll keep looking for ways of doing this, but I can't spend much more time on it. Right now my data is unprotected and I have to start doing something.
Cheers
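One minimal way to handle that logging/alerting part is to wrap the backup script, capture all output in a log file and mail it based on the exit status; a sketch assuming a working mail command and a hypothetical /usr/local/bin/backup.sh:

#!/bin/sh
# Hypothetical wrapper: run the backup, log everything, mail a status report.
LOG=/var/log/backup-$(date +%F).log
if /usr/local/bin/backup.sh >"$LOG" 2>&1; then
    mail -s "Backup OK $(date +%F)" admin@example.com < "$LOG"
else
    mail -s "Backup FAILED $(date +%F)" admin@example.com < "$LOG"
fi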
-
Docker containers, in general, are reasonably well documented. I would say, however, that some basic prior understanding of general Docker principles would be beneficial.
As to your specific questions about specific containers, I can't answer those because I don't use those particular applications. Try them for yourself, and if you are not happy, delete the containers and move on to something else. Or try reading the documentation that comes with every container; maybe the answers are there. You can read that stuff without installing anything.
-
That's not what I meant.
This would be my first attempt at Docker/Containerization.
I would assume that OMV4 has support for Docker or that I am free to install it via Apt.
If I do so, how do I set it up so that the container has access to the data, and only in read-only mode?
You see, it's a whole project for me...
I'll try to set up a VM with OMV4 to get acquainted with the installation of Docker and Duplicati. I just hope this is not a big puzzle that will take me a very long time to finish. I need to secure my data.
Thanks for the heads-up.
Cheers
-
There is a Docker plugin for OMV.
The rest of your questions are either answered within the Docker plugin itself, in the documentation that is available for every container, or in one of the guides available here on the forum.
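As for the read-only part: with plain Docker, you append :ro to a volume mapping. A minimal sketch, assuming the linuxserver/duplicati image and hypothetical host paths:

# Bind-mount the data share read-only (:ro) so the container can read
# but never modify it; the config volume stays writable.
docker run -d --name duplicati \
  -p 8200:8200 \
  -v /srv/duplicati-config:/config \
  -v /srv/dev-disk-by-label-data:/source:ro \
  linuxserver/duplicati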
-
Thanks for the heads-up.
But in all honesty I still haven't given Duplicati's specs a good look.
I need to understand if it does exactly what I need/expect.
I'll drop a line or two here after I do a good reading about it.
Thanks -
Come to think of it, can I run file searches on my Duplicati backups? Let's say I've accidentally erased 10 pictures. Does it have a database of the files that are stored in the backup files (which I would assume use some sort of compression and encryption)?
This can be done from the GUI of Duplicati.
To restore, you select the backup from which you want to restore (from different points in time in the past). Then you can browse to the files you want to restore, and restore them. -
In this thread there is a link to a manual for Duplicati (first post):
https://forum.duplicati.com/t/what-about-a-manual/1061 -
-
Guys, I was looking into my NAS settings and I noticed that the last scrub of my ZFS pool is from 6 months ago D: Should I do a scrub now? Also: do you suggest doing a scrub every week? How can I do that?
-
Guys, I was looking into my NAS settings and I noticed that the last scrub of my ZFS pool is from 6 months ago D: Should I do a scrub now? Also: do you suggest doing a scrub every week? How can I do that?
Best practices recommend a monthly scrub, but it depends on your ZFS use. In my case, the pool is only used to share movies and photos with family, so 2 scrubs a year are enough to detect possible problems in the files.
-
Thanks! What's the command to launch a scrub? I will create a job inside "schedule job"
-
zpool scrub nameofthepool
e.g.: https://docs.oracle.com/cd/E18…/html/819-5461/gbbwa.html
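For a scheduled job, the same command can go straight into cron; a sketch assuming a pool named tank and a monthly schedule:

# /etc/cron.d/zfs-scrub -- scrub the pool at 03:00 on the 1st of every month
0 3 1 * * root /sbin/zpool scrub tank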
-
EDIT: wrong topic