Okay, understood.
May I suggest two small modifications - if you have the time and inclination, of course.
1. I would change "Undelete" to "Undelete all" - that would be clearer.
2. I don't know if the OMV web framework lets you open a prompt window before the command window, but if it's possible, you could ask the user for a filter pattern and then run snapraid fix -m -f PATTERN, which would undelete only the files matching the pattern (the pattern can contain the wildcards * and ?) - see the sketch below. I would make it a separate menu option, "Undelete files".
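For illustration, this is the kind of command such a menu option would run (syntax from the snapraid manual; the pattern here is just an example):
Code
snapraid fix -m -f '*.mp4'   # restore only missing files matching the pattern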
Going back to the problem with snapraid itself (not the plugin) - I've just looked into the manual, and it is very unclear about maintenance.
There is the diff command, which lists files with modified timestamps, and there is sync, which makes the actual snapshot/backup to the parity disk.
There is also the scrub command, which fixes hidden corruption.
I couldn't find any information on what the procedure should be. You have made a "Scheduled diff" menu entry which runs the diff command, but according to the manual, diff only creates a list of modified files - it doesn't sync anything yet - so it looks like sync should be executed afterwards. So I guess this is another modification I could suggest: run a scheduled sync just after the scheduled diff (if it doesn't already work that way).
There is also the question of scrubbing. The manual says it should be performed on synced drives only.
But won't syncing inject hidden corruption into the parity data, making the whole scrub pointless? Or maybe sync only syncs the files that were found by diff? Either way, if the manual says to scrub only synced drives, shouldn't "Scheduled scrub" sync the drives before scrubbing?
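For reference, the maintenance sequence I'd expect from my reading of the manual (the scrub percentage is just an example):
Code
snapraid diff        # report what changed since the last sync (changes nothing)
snapraid sync        # update the parity to match the current data
snapraid scrub -p 10 # then verify a portion of the synced array against parity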
And once again - I'm really not saying your plugin is shitty - it is rather snapraid that runs commands in a weird way. For example, with fsck you have to use the -y parameter, otherwise you get prompted for confirmation at every turn; in snapraid everything assumes the user knows perfectly well what he is doing. This is really dangerous, because I don't think anyone is that confident handling RAID arrays - maybe only people in data centers. Everyone else looks to forums and manuals for advice and tries things - and unfortunately makes mistakes.
-
I'm not saying you have done a shitty job - you have done the job from the perspective of your own knowledge, without thinking about making it obvious for someone who is just a simple user.
This is how people make software in most cases - they think about features, but not about making the software intuitive.
---
Nice of you to tell me to make plugins myself, but as I remember, OMV uses ASP or some framework I don't know. I program in pure PHP and sometimes Java, which is not very useful for OMV.
However, I make software for people who sometimes don't know much about computers, and I have to think the way they think. That's why, when I see programs that expect everyone to read manuals, I don't really like them - I try to make my programs simple, and I would like others to do the same.
If I had to read and remember the manual of every piece of software I use, I would have no time to read anything else - especially since I have a rather weak memory, and after a few more manuals I would probably forget the previous ones.
I'm not saying you make your plugin difficult on purpose - maybe you just can't imagine what it's like not to know the things that are obvious to you.
----
No, I was not asked to confirm whether I wanted to restore the found files - I just clicked Undelete in your plugin, and it started restoring. It restored everything I deleted ages ago - how is that possible if a sync was done last night? It should have been, because I set it to run every night when I was setting up snapraid.
I'm not sure whether it also overwrote the files that were modified since the last sync. If it did, it caused even more problems than just a mess -
Okay - I know what snapraid undelete has done - made a big mess, and overwritten blocks that I could have restored from.
Why is no confirmation required before it starts messing with the drive? -
I will check "recycle bin" in the SMB settings.
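(For the record, the SMB recycle bin is Samba's vfs_recycle module; a sketch of the share options it adds, with option names from the Samba docs - the exact values OMV writes may differ:)
Code
[myshare]
   vfs objects = recycle
   recycle:repository = .recycle/%U   ; deleted files land here, per user
   recycle:keeptree = yes             ; keep the original directory structure
   recycle:versions = yes             ; rename duplicates instead of overwriting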
As for SnapRAID - I understand how it works in terms of structure and how it syncs.
I don't understand how it works when something happens. How and when does it detect corruption?
I understand it is not automatic - the actions need to be triggered. So it differs from normal RAID - with normal RAID you don't need to do anything; you can even set up spare disks that will be used for rebuilding if one of the drives fails.
In the snapraid plugin settings you don't know what is what - options like "Spin", for example. And what does the Info/Status output mean:
Code
Loading state from //snapraid.content...
Using 561 MiB of memory for the file-system.
SnapRAID status report:

   Files Fragmented  Excess  Wasted   Used   Free  Use Name
            Files   Fragments    GB      GB     GB
  639565      422       3593    6.9    3328    490  87% sda
 --------------------------------------------------------------------------
  639565      422       3593    6.9    3328    490  87%

[scrub-age histogram: oldest blocks scrubbed 41 days ago, newest 6 days ago]

The oldest block was scrubbed 41 days ago, the median 41, the newest 6.

WARNING! The array is NOT fully synced.
You have a sync in progress at 99%.
21% of the array is not scrubbed.
You have 25373 files with a zero sub-second timestamp.
Run 'snapraid touch' to set their sub-second timestamps to a non-zero value.
No rehash is in progress or needed.
No error detected.
This plugin looks like someone made it just for himself - to automate some actions that are obvious to him - no descriptions, no help texts, etc.
I understand - it is the programmer's right to make software as he wants, but why should I pretend it is great, intuitive, and simple?
It's not.
For example - I found the option to undelete - okay - maybe I will be able to see the files and undelete what I want.
Instead of that, some scanning started. Great! Without even asking if I want to proceed. No option to abort - who knows what will happen now? I don't know -
I have snapraid installed, but I don't know how it works, or if it works at all.
I configured it somehow from a tutorial I found, and I know a sync happens during the night.
However, I don't know how it works - how it determines the difference between a corrupted file and one that was changed on purpose.
I see it as something that exists, but I don't feel it is really reliable - mainly because I don't see it working.
There is no proper monitoring software - no GUI, no web UI, not even a tool that would display the information in a human-readable form.
The problem with programs that have no interface is that you have to type everything manually.
I'm guessing that the snapraid undelete feature will expect me to specify the exact name of the file to undelete. I guess it won't display any selector where I could choose which file to undelete?
BTW, I've found that photorec has options to set which file types to scan for, and it's now scanning the drive - it shows only 8 hours remaining - not too bad. -
I've accidentally deleted (through SMB) a new video file which I hadn't backed up yet.
I haven't saved anything on that drive since then, but all attempts to undelete it have failed.
I tried testdisk - it didn't display any deleted files in the folder where the file was located.
I tried ext4magic - it said it couldn't find any inode.
I tried extundelete - it displayed nothing that could be undeleted.
I tried photorec - it started restoring hundreds of thousands of files and predicted 32 hours of work - I stopped it because I can't block my NAS for that long.
Regardless of the situation with my file, I've found that maybe OMV is not really prepared for undeleting data?
In the past I was using Synology (or rather XPEnology), and there was an option where everything deleted by any SMB user actually went to a Trash folder.
My question is - how is OMV prepared for the situation when something is accidentally deleted? Is there any recovery function? Option? Plugin?
Or must the tools mentioned above be used?
And why were my deleted files not listed by the above tools?
Today I deleted probably ten big files (one by mistake) - and none of the deleted files was found.
Thanks -
Thanks for the help - that fixed the issue.
-
I think the problem lies in a recent update of containerd.io (https://github.com/qdm12/gluetun/issues/2606). I had the same problem, which was solved by adding - /dev/net/tun under devices in the compose file, as suggested above.
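For anyone else landing here, this is roughly what that change looks like in docker-compose.yml (the service and image names are taken from the linked issue; the other settings are illustrative):
Code
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun   # pass the host's TUN device into the container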
Hi, could you tell me more about adding - /dev/net/tun under devices in the compose file?
I'm not really good with Docker, and the above sentence tells me nothing.
Thanks -
Not sure what you mean. Your system will always install the regular kernel during the upgrade. If you have backports enabled, your system will update to the backports kernel the next time you install updates.
The kernel plugin is meant to install the proxmox kernel. I like the proxmox kernel and it is recommended if using zfs but you don't have to use it.
Depends on the board. SBC boards use their own kernels that might be upgraded with the upgrade.
7.1 kernel? That doesn't exist. Are you having a problem with the system? There is nothing wrong with the stable 6.1 kernel.
I incorrectly assumed that openmediavault-kernel 7.1.1 is version 7.1.1 of the kernel.
I don't use proxmox or zfs - I don't even know what I would do with proxmox.
This is probably for nerds
You also asked if I have any problems with the system - I've been using it for just a few hours now and nothing has exploded, so it looks alright. -
So should I install a newer kernel?
Is the openmediavault-kernel 7.1.1 plugin the right one to install to upgrade the kernel?
Why did other people reporting on the update have different kernel versions (like 6.3 before the update and 6.4 afterwards)?
I didn't see any kernel available to install other than 6.1, and now 7.1.
Thanks -
Code
Before:
root@omv6:~# uname -a
Linux omv6 6.1.0-0.deb11.17-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.69-1~bpo11+1 (2024-01-05) x86_64 GNU/Linux

After:
root@omv6:~# uname -a
Linux omv6 6.1.0-23-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.99-1 (2024-07-15) x86_64 GNU/Linux
Patient:
Laptop Acer Aspire E1-571
CPU: Intel(R) Core(TM) i3-2348M CPU @ 2.30GHz
MEM: 16GB
Disks: 2 x Samsung SSD 870 QVO 4TB as SnapRAID, some budget SSD 2TB as additional backup, SanDisk USB stick as system drive.
Used plugins/functions: Compose, SnapRAID, Samba.
Upgrade passed without any problems - took maybe 30 minutes. -
Did you activate the docker repository in the omv-extras tab? https://wiki.omv-extras.org/do…cker_compose#installation
That was the cause - I don't know how I missed that.
Thanks. -
I'm trying to install the compose plugin; every time I get the error:
** CONNECTION LOST **
After connecting back to the Workbench, I see all the compose plugin settings, but if I try to set the compose folder (to a prepared shared folder location) I get this error:
Code
500 - Internal Server Error
Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; omv-salt deploy run --no-color compose 2>&1' with exit code '1':
debian:
----------
ID: docker_install_packages
Function: pkg.installed
Result: False
Comment: Problem encountered installing package(s). Additional info follows:
    errors:
        - Running scope as unit: run-reb0218a7fcd8460586f8c8854cf06954.scope
          E: Package 'docker-ce' has no installation candidate
Started: 23:29:02.666812
Duration: 7461.964 ms
Changes:
----------
ID: docker_compose_install_packages
Function: pkg.installed
Result: False
Comment: Problem encountered installing package(s). Additional info follows:
    errors:
        - Running scope as unit: run-r0633ed45cba145e8b597f1645a3ac6eb.scope
          E: Package 'docker-compose-plugin' has no installation candidate
          E: Unable to locate package containerd.io
          E: Couldn't find any package by glob 'containerd.io'
          E: Couldn't find any package by regex 'containerd.io'
          E: Package 'docker-ce-cli' has no installation candidate
          E: Unable to locate package docker-buildx-plugin
Started: 23:29:10.129572
Duration: 927.888 ms
Changes:
----------
ID: docker_purged_package...
I've tried that on OMV 6.9 and on 6.10 (I just installed the update) - exactly the same result.
I would like to use Docker, so I'd appreciate it if someone has any ideas what to do.
Thanks -
When I try to install the ZFS plugin, I get an error.
Version
6.8.0-1 (Shaitan)
Kernel
Linux 6.1.0-0.deb11.7-amd64
Code
500 - Internal Server Error
Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; export DEBIAN_FRONTEND=noninteractive; apt-get --yes --allow-downgrades --allow-change-held-packages --fix-missing --allow-unauthenticated --reinstall install openmediavault-zfs 2>&1' with exit code '100':
Reading package lists...
Building dependency tree...
Reading state information...
Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:
The following packages have unmet dependencies:
 linux-headers-amd64 : Depends: linux-headers-6.1.0-0.deb11.9-amd64 (= 6.1.27-1~bpo11+1) but it is not installable
E: Unable to correct problems, you have held broken packages.
-
A single backup might not be, but borgbackup makes more than one backup. So it is very reliable.
zfs can't protect against corruption any better than snapraid. rsync would give you one backup. Borgbackup could give you many backups at not much more space because it dedupes and compresses.
I think you need to research why you think zfs is better and snapraid is only ok. Read the snapraid comparison which compares snapraid to zfs and other raid options. https://www.snapraid.it/compare In particular, look at the "Fix silent errors" section since you seem to worry most about it.
My NAS is an old laptop with two SATA ports - I could maybe connect something via USB 2.0, but I don't want to.
My backup is kept in a drawer on six HDDs, while in the NAS I have two SSDs.
In fact my NAS has two roles:
1. It is a home media server (including photos and videos from the past) - this data is not on any other PC, so I keep it on those six HDDs.
2. It is a backup and swap-hub for the family laptops/PCs/phones - this data doesn't need to be protected, as it has copies on the laptops.
That's why I don't care too much about having a constant backup agent running, but I would like to avoid silent corruption.
Especially my tens of thousands of family photos from the past 25 years - it would be stupid to lose them. To be honest, I think some of them may be corrupted already.
It is good to have a backup of everything, but I can't imagine checking thousands of images and restoring broken ones from the backup. That's why I would prefer some automation here. -
Snapraid calculates parity to detect changes. If the modified time of the file hasn't changed but a bit is flipped, then corruption has occurred. But snapraid isn't different than raid which means it is for availability and bitrot protection not a replacement for backup. I recommend a backup method that can create multiple backups and protect against bitrot like borgbackup.
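On the command line, that detect-and-repair cycle looks roughly like this (commands from the snapraid manual):
Code
snapraid scrub    # re-read the data and compare it against the stored hashes/parity
snapraid status   # report any silent errors found by previous scrubs
snapraid fix -e   # repair only the blocks that scrub marked as bad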
I have a backup of everything, but since the backup has no bitrot protection, it's not reliable either.
In my situation silent corruption is the biggest danger - that's why I'm considering converting my disks to ZFS in RAID1, or ZFS with an rsync backup on a second drive which would also be ZFS. RAID1 would give me availability; rsync would give a bit better protection.
I was considering ZFS, but if you are sure that data protection in snapraid is okay - now I'm not sure.
Another thing: I don't know the cost and risk of using ZFS either. -
I don't know Italian
-
gderf Thanks for the reply; however, the links didn't answer all my questions.
Okay - I've found what the diff script is and what it does, but I still haven't found information on how it protects the data, and whether it really does.
As I understand it, the diff script (or maybe I should rather say SnapRAID) can't recognise whether a file was deleted/modified by the user or by cosmic radiation (or whatever nasty thing), and uses a threshold factor to guess what actually happened.
This is a bit ridiculous, because then it doesn't actually protect against hidden corruption at all. A simple single-bit flip can be assumed to be the user's doing and omitted by the script.
That kind of protection, where you believe your data is protected but it isn't, is not just weak protection - it is a dangerous lie.
Or maybe I'm wrong - correct me, please.
Thanks -
Hi,
I have a simple, or maybe rather poor, SnapRAID setup of 2 x 4TB SSDs + a 64GB system USB stick.
One 4TB disk is for data.
The second 4TB is for parity.
Content files are kept on the data disk and the system USB stick.
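For context, a sketch of the snapraid.conf this layout implies (the paths are illustrative, not my actual mount points):
Code
parity  /srv/disk-parity/snapraid.parity
content /srv/disk-data/snapraid.content
# second copy of the content file on the system stick:
content /var/snapraid/snapraid.content
data d1 /srv/disk-data/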
I've set Scheduled Diff with the default settings - every Sunday at 2:30 am.
Everything seems to be set up as in the guides, but I don't understand one thing:
What is this weekly automation for? For syncing or for scrubbing?
If, after setting everything up and the initial hashing and syncing, I delete some data or create or change some files, do I need to wait until Sunday for everything to get synced?
Okay, I know I can sync manually, but shouldn't sync be scheduled to run more often - I don't know - daily, hourly, etc.? How do I set this up? With a normal crontab, like the sketch below?
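For example, I imagine something like this in root's crontab would run a nightly sync (just a sketch; the path assumes snapraid is installed at /usr/bin):
Code
# m h dom mon dow  command
30 2 * * * /usr/bin/snapraid sync >> /var/log/snapraid-sync.log 2>&1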
But maybe I don't understand the logic of how snapRAID works? To be honest, I'm not sure how snapRAID can check for data corruption if it must be synced before scrubbing.
Won't sync copy corruption from the data disk to the parity disk? If so, how will it be repaired if both are corrupted?
But on the other hand, if I don't sync, scrub will show loads of errors whenever anything has changed - like when I've deleted or modified something.
Another unknown thing is exclusions - personally, I think there is a bug in the plugin.
When I add a new exclusion, the file browser asks me for an absolute path (the selection starts in the root directory). But if I select it that way, snapRAID complains that the path is wrong.
So I had to set a path relative to the data disk only. This worked - or rather, snapRAID didn't complain this time - but how do I know it worked? How can I check that the excluded directory is really excluded?
Especially since I excluded this directory after the initial hashing and syncing.
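One way to verify it, I suppose (commands from the snapraid manual; the directory name is just an example): exclusion rules end up in /etc/snapraid.conf relative to each data disk root, and 'snapraid list' prints every file snapraid tracks, so after the next sync the excluded directory should no longer appear.
Code
# rule as written in /etc/snapraid.conf:
exclude /downloads/

# after a sync, nothing under that directory should be listed:
snapraid list | grep downloads/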
Thanks in advance
A -
Unfortunately - I just checked, and my current provider uses CG-NAT, and the one I'm planning to move to also uses it, so this means I'm screwed.