Do your disks show up in the OMV GUI under the Disks tab, or under the File Systems tab?
Nope!
Suddenly all 3 disks are gone; only my SD card shows up under Disks.
PS: Am I writing in the wrong forum section?
Please help me identify where it is best to post. Thank you.
Greetings,
I had my OMV 5 running smoothly for quite a while on a Raspberry Pi 4 4GB.
The pi is on a DeskPi case.
3 drives:
1 SSD internally
2 external USB HDDs (USB 3)
Suddenly, other devices on my LAN that use OMV shared folders give me errors.
- I checked the OMV Disks section and I see nothing but the SD card
- I checked fdisk -l and nothing but the SD card comes up
- I checked lsusb and the USB devices are present (to rule out that USB is no longer detected):
Bus 003 Device 003: ID 1a86:7523 QinHeng Electronics HL-340 USB-Serial adapter
Bus 003 Device 002: ID 05e3:0610 Genesys Logic, Inc. 4-port hub
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
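Two more checks that might help narrow this down (a sketch only; device names and the grep pattern are just examples):

```shell
# lsblk lists every block device the kernel currently knows about; if the
# disks are missing here too, the problem is below OMV (USB/power/kernel level):
lsblk -o NAME,SIZE,TRAN,MODEL

# The kernel log often says why a USB disk dropped (resets, undervoltage,
# I/O errors). The filter pattern is only a starting point:
dmesg | grep -iE 'usb|sd[a-z]|voltage' | tail -n 30
```

If dmesg shows undervoltage messages, the Pi's power supply or an underpowered hub would be the first suspect.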
What did I try?
I tried taking the external disks out of the USB hub and attaching them directly to the box's USB ports -> no joy
I tested whether the HDDs work by mounting them on my Linux laptop -> joy (they work fine)
Conclusion:
No idea what to do to get my disks working again.
Any help is welcome
System:
Raspberry pi 4b 4GB
5.10.60-v7l+
armv7l
fdisk -l:
Device Boot Start End Sectors Size Id Type
/dev/mmcblk0p1 8192 532479 524288 256M c W95 FAT32 (LBA)
/dev/mmcblk0p2 532480 31116287 30583808 14.6G 83 Linux
Thank you
Yes, that is possible.
The files are not compressed and not encrypted. They are "just" de-duplicated on file level using symlinks.
I must have overlooked that.
It's a decent solution if it keeps the folder structure too.
I will look at it again then.
But I hope that UrBackup will implement an easy way to reinstall the client, connect to the server, and be recognized as the owner of the previous backups.
Thank you
Your backup is on your data disk. You can access it without UrBackup or OMV. Just connect the drive to a computer that can read the filesystem and you have access.
If you have OMV running, you can create a shared folder pointing at the backup and add that shared folder to SMB. Then you can access your data through SMB as well. Same with NFS.
Or you use SSH to connect.
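As a concrete sketch of the SSH route (the user, IP, and backup path below are placeholders, not your actual setup):

```shell
# Placeholders: adjust user, IP and the path where UrBackup stores its files.
NAS="admin@192.168.1.100"
BACKUP="/srv/dev-disk-by-label-data/urbackup/laptop/current"

# Browse the backup, then copy a single file back over ssh:
ssh "$NAS" ls "$BACKUP"
scp "$NAS:$BACKUP/Documents/notes.txt" ~/restored-notes.txt
```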
You're suggesting to share the folder where I back up, so I can access it manually and not via UrBackup?
(as I cannot remember that) Are the files compressed or anything? Can I just map the share and copy what I need?
Then delete everything and start a new backup strategy?
Restoring files is really easy. Individual files can be restored from the GUI.
Complete folders or the complete backup can be restored e.g. using rsync or making the folders available via smb.
To restore an image there is a CD image that supports you. But I have not tried that.
This is the old post here in OMV:
True, but my issue is not when all is up and running and UrBackup is configured.
The issue comes if I now decide to format my laptop (or my SSD dies and I buy a new one), reinstall Linux and UrBackup, and at that point want to connect to my old file backup.
That's when things go wrong.
Were your experiences with file backup or image backup?
Actually both, but both are under the same user.
It's a while ago now... my memories are fading...
I used to have file backups for restoring possibly deleted files, and images in case I had to restore the whole laptop.
I did; quite a while ago I was very busy with it.
I found it a very good and powerful tool.
Here is my issue and why I dropped it for now:
Once the system is set up for backup, and I even do a snapshot of the system, all is in order and safe for rescue.
Here comes the issue: if a rescue is needed, it is not just an installation and then restoring the backup using some sort of password...
UrBackup requires a more elaborate configuration (I think some code it gave at the beginning, which I mostly neglected to save somewhere; and even then, years later I would probably forget where I put the codes) to let it understand that I am the same user who wants to restore the system, or just the files I want.
Moreover, since my OMV is in an enclosure with a Raspberry Pi 4B and the SSD is mounted inside, I can't just take the disk out and use the image as a virtual disk...
In the end I found it too complicated and moved on to other alternatives.
I never found a backup system as nice as UrBackup... but I could get my data back much more easily elsewhere.
When I posted about it, no one really had a complete live scenario for restoring the backup, and I figure theory is great, but practice is what counts after all.
Surely I am wrong somewhere.
I did try to reinstall UrBackup on a new system, and I failed to establish a connection to my previous backup.
Sorry for long answer. I felt I needed to expand my response.
I will look at it, and I hope they have made it easier for users to reconnect a backup on a new system.
This will be a long post, as I'll try to be thorough... The way my backups work, I don't feel I need version backups remotely, so I've never tried to do it with Syncthing (although the documentation says it works; if it doesn't, there are other options like rsnapshot, etc)...
My setup might seem a little confusing, and while it's probably not as good as rsnapshot or another versioned backup. Some might even say it isn't "automated" enough, and I'd agree... but that is partly why I like it. I've used this setup for several years and it works well for me. It has saved me from data loss a couple times... which is the whole point of a backup.
In my scenario:
Drives A & B, are my "working" drives. Basically where all my data is, container data is here, new stuff is added and modified, serves data to clients, etc.
Drives C & D are an "internal backup". These drives are in the same physical server as A & B. They sync once a day via rsync. This rsync job has the delete trigger turned off. So if I'm making lots of changes, etc.. to A & B, it is not uncommon to have multiple copies of something on drives C & D. Usually this is simple stuff. For instance, I just rewrote all my docker-compose files with symlinks about a month ago.. right now I have 2-3 copies of those on C & D, since it was a process that took me a couple days due to work, testing my new compose files, etc.. Occasionally if I delete a movie or entire TV series from A & B, it will remain on C&D for a while and the file system sizes will be a bit more noticeably different from A & B... but I would say for the most part, this is pretty uncommon for my work flow. Short of being used as this internal backup, C & D are not used in any service, etc.
My remote backup, we'll call drives E & F, sync to drives C & D with Syncthing. Since drives C & D are my internal backup, again it's not uncommon to have multiple copies of data that has changed, on my Syncthing backup.
Once a month or so, I'll go through and check my differences between A & B and C & D. As I said, for my data flow, most of the time the differences are pretty subtle and I usually know what they are (in the case of my docker-compose files right now). Once I check the backup drives and I know nothing crazy is going on (ie, it's synced multiple copies of a file that I know I didn't change, this would mean an issue with Drives A or B). Once I've verified all is in order, I enable the delete trigger on the rsync job, and run it manually. This brings A & B and C & D completely in sync. In turn, once it finishes, C & D will then sync to E & F, bringing everything 100% in sync. Once it's all done, I disable the delete trigger on my rsync job again.
In the event of an accidentally deleted file, etc.. from A & B, I can just SSH my system, and copy the file from C & D. In the event I've accidentally lost something on A & B, and C & D failed for some reason, the data should still be on E & F. In the event of a major issue (Fire, Flood, etc.) where I lose A, B, C, and D... My data should be reasonably up to date remotely on E and F.
Beyond that, the only thing I really do is check my rsync logs every night, and make sure nothing super crazy happened between A & B and C & D.
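A minimal sketch of that routine with rsync (the drive paths are made up for illustration; the real jobs would run from OMV's scheduled rsync tasks):

```shell
# Daily jobs: mirror the working drives to the internal backup WITHOUT a
# delete flag, so files removed from A/B linger on C/D for a while:
rsync -a /srv/drive-a/ /srv/drive-c/
rsync -a /srv/drive-b/ /srv/drive-d/

# Monthly, after manually reviewing the differences, run once WITH
# --delete to bring both sides completely in sync:
rsync -a --delete /srv/drive-a/ /srv/drive-c/
rsync -a --delete /srv/drive-b/ /srv/drive-d/
```

Note the trailing slashes: in rsync, `/srv/drive-a/` means "the contents of drive-a", not the directory itself.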
Hope that makes sense.
Nice!
Unfortunately, in my case I am looking at backing up my laptop. I don't have space for C & D drives.
But I like your configuration.
I'm not sure why you mounted the share locally; you shouldn't need to. Like I said, I last tested this months ago, but my instructions seem to confirm this...
If you were targeting /mnt/whatever , then you were doing your own thing and not what I suggested.. The way you're trying to do this, you wouldn't even need to run an rsync server.. It *should* work just the same.. however since it didn't, I can only come to a couple reasons why:
1. You entered the path wrong
2. The share was not mounted.
That's how it ended up on your OS disk, but again, it wasn't necessary to do that. If you had done what I put above and it could not reach the server, you would just get an error that it could not connect to the rsync server, and that's it.
I took a break from this. I will pick it up again soon. I thought I did the right thing... but something did not go the way it should. I will review it.
Ps,
How about your Syncthing tests?
Do you also agree that Syncthing can't do incremental and differential backups?
I am puzzled by the versioning in Syncthing... I need to read more about whether that can act as a sort of differential...
https://docs.syncthing.net/users/versioning.html
It's interesting... Staggered File Versioning might even do the trick.
But I think that will make a full copy, and the daily updates are not incremental... not sure...
It's getting interesting.
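For reference, this is roughly the shape Staggered File Versioning takes in Syncthing's config.xml on the receiving side (normally you set it through the GUI; the folder id and path here are invented). Versioned copies land in a `.stversions` folder, so it keeps old copies of changed/deleted files rather than making the transfer itself incremental:

```xml
<folder id="laptop-home" path="/srv/backup/home">
    <versioning type="staggered">
        <!-- maxAge is in seconds: keep old versions for up to 180 days -->
        <param key="maxAge" val="15552000"></param>
    </versioning>
</folder>
```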
I'm gonna have to look at grsync again to make sure, but the rsync share should not have to be mounted on the client. Like I said, I don't really use rsync for clients/servers; I use it to back up two servers... and they definitely do not have remote mounts on them.
Well... all this time I tried things out... and I have come to the conclusion that grsync is not what I need.
First: after manually mounting the share, I configured grsync to save the files in /mnt/mounted_share.
I ran a test and had no errors, so I started the backup... it was super fast... too fast!
I checked... yes, my whole /home folder was backed up in /mnt/mounted_share, BUT it was only local and I ran out of SSD space!!!!
I thought a mounted share would point to the network share... that did not happen!
Looks like I no longer know what Linux does when mounting a share!!
I have had enough for today... will try more in the future... this is not working as I want yet.
You kind of got me curious, so I just set it up on my laptop and server (I tested it a long time ago, but when I rebuild my servers next month I might actually go with Syncthing for remote backups). It's actually quite easy. Just going through the options, it does have versioning. It looks like you can set up versioning in the client, but I've not messed with it yet (I've got it set to none at the moment)...
It's really not difficult to set up. The one thing that got me: since the client is also configured via a webUI, and it looks very similar to the server-side webUI, I caught myself doing things on the client side when I thought I was on the server side (and vice versa). It just resulted in unnecessary confusion.
My suggestion: the server and client each have a couple of themes for their web interface... set your server to a "Dark" or "Light" theme, then set the client to the opposite.
Hi KM0201,
I use Syncthing a lot. I know how to configure it.
What I don't think it does is incremental backup: copying only the modified/changed/deleted files...
If I am not wrong, versioning means it will keep a number of versions of the same backup for a defined time (correct me if I am wrong).
That is, of course, a solution... provided we have enough space on disk.
In any case, it is a very valuable solution.
You can also choose who sends, who receives, or whether it should be both ways. It is a very dynamic solution, I agree.
I would still like to give rsync a go.
But I do see issues coming up (with rsync):
the remote folder has to be mounted at all times, which means that if the mount fails, no backup takes place.
Recently I had issues permanently mounting the share in fstab (can't figure out why just yet).
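For the fstab mounting issue, a sketch of a CIFS line with the two options that usually matter for network shares (server, share, credentials file, uid, and mountpoint are all placeholders):

```
# /etc/fstab -- "nofail" stops boot from hanging if the NAS is unreachable,
# "_netdev" delays the mount until the network is up.
//192.168.1.100/backup  /mnt/backup  cifs  credentials=/etc/cifs-creds,uid=1000,nofail,_netdev  0  0
```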
Syncthing actually works in almost any scenario, with or without VPN or on the local LAN, and transfers are encrypted at all times. With the latest version you can even choose to have a password for the transferred files...
The more I write about it, the more I think it is probably the better solution...
It's just that I wanted to try the built-in rsync server in OMV 5...
...
To be seen...
I think you can set the syncthing client up that way... now you're making me curious.
I believe Syncthing has a versioning system, but I don't think it is incremental... not sure, but I guess you end up with several copies of the backup when using versioning...
Also for what it's worth.. it might be easier to set up a syncthing server in docker, install a syncthing client on your OS, and then set it up that way. It would almost certainly be easier to automate.
That is actually my next step if rsync gets too messy to set up.
The only thing is that I like rsync for the way it handles the backup and the daily/weekly/monthly backups, and the simple way to restore a single file (if I am not confusing it with Timeshift).
I will give a shot with rsync and see...
Thanks anyway.
Honestly, that's been quite a while ago and I don't really remember. I didn't really mess with it beyond what I did above.
But a few things I recollect..
1. The shared folder is mounted on the client machine... and that is what you point grsync at... Not sure if that answers your question, but if you're required to use a password to mount it, then I don't think there is any reason for grsync to also require a password.
2. Blank? I highly doubt it. You could probably set it to 192.168.1.0/24 (assuming your router is 192.168.1.1)... That would probably allow any client on the network to write to it.
3. rsync shares the path, so it would be exactly what I put... You would not need the absolute path here.
ok. thanks
OK, sorry, I got tied up for a second and it was a little harder than what I thought... but really not difficult once you get the hang of it (I haven't quite figured out how to automate this, but this will at least allow you to do manual backups... you'll just have to do some further reading).
First, on the OMV side.
Create a shared folder where you want your home backed up (make sure you remember the name of this folder)
Go to rsync in the webUI
Click the Server tab
Click Settings and enable the rsync server.
Click Modules
Add Module
Choose the shared folder you just created, and name the job
I used an OMV user/users group... if you don't have a user on OMV, create one... that's all I can suggest, as I'm not sure this will work without a user.
Scroll down to where it says Hosts allow, and put in your Mint machine's local IP address.
Save and apply changes to the module.
On Mint
You might want to start with a smaller folder than your entire home folder to make sure this works how you want..... but it will work either way.
sudo apt install grsync
Open grsync from the Mint menu
Click Advanced
The top path is the path on your laptop that you want to back up (/home/something/, I'm assuming).
The bottom path is the path to your rsync folder on OMV, so it will look like this:
rsync://username@serverip:/shared_folder_name (the one you created on OMV)
Click the blue button at the top... This is a "dry run".. it's not going to sync anything but will show you what it would have done (or if there's an error, it will show it)
Assuming all goes OK, and you want to sync, click the gears beside the blue i button, and that will actually run the job.
Once you're all done and you've got it running how you want, click the Green + button, and name the profile for the job, so it will be available next time you start grsync
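For comparison, here is the same job as a plain rsync command (using the example names from this thread; substitute your own). The `n` flag makes it a dry run, like the blue button in grsync:

```shell
# Dry run: list what would be transferred, without copying anything:
rsync -avn /home/youruser/ rsync://username@192.168.1.100/shared_folder_name

# If that looks right, drop the "n" to actually run the backup:
rsync -av /home/youruser/ rsync://username@192.168.1.100/shared_folder_name
```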
Thanks for this guide. I am now busy setting it up as well.
I do have extra questions:
- Can I use a shared folder that actually uses a user with a password (not public)?
- In Hosts allow you mentioned putting in the client IP address. If that is not static, can I leave it blank? The description mentions leaving it blank for default settings; I searched the OMV docs and can't find a description of what the default settings are. Or can I use /24 to set a subnet?
- When setting the shared folder path in the client, I assume it is the absolute path, correct?
Thanks
I'm sure there is a way, just not sure what it is....
As for avoiding the problem in the future... use the webUI to manage folders, and not the command line or SMB, unless you are 100% sure what you are doing... that would be the lesson, I think.
I kind of have an idea of how to manage folders in Linux, but surely not so much when it involves OMV.
I also think there must be a possibility to solve that issue.
Besides all the tests I made to solve the issue, which led me into confusion, originally I simply moved a subfolder into another folder (making it a subfolder of another folder). And when trying to delete and restart my folder structure, I was not able to.
That does not relate (at first) to my CLI clean-up!
I will try to open a new post with a simpler request that involves only what to do when moving folders using the GUI and trying to delete them afterwards.
First I will reproduce the issue to make sure I have a good case to post.
After testing, I will format and restart.
Thanks anyway for trying to help.
Regards,
do I have to reformat the drive?
It is an option, as I am at the beginning. But I would love to know if there is a way to fix the issue, in case this situation reproduces later when the NAS is fully operational.
Thanks
If that is the case, it did not work:
The configuration object 'conf.system.sharedfolder' is not unique. An object with the property 'name' and value 'user_name' already exists.
In this case, it complains about the Name column using the same name as the one I cannot delete.
So I changed the name to user_name1 in the Name column, but recreated the share path as the original one: /public/user_name.
Successfully created and applied.
Then I deleted the public shared folder with its subfolders and contents. Successfully deleted.
But the one I need to delete still cannot be deleted; the Delete button is grayed out.
I must admit that at this point I am getting confused... I did many tries and tests...
Is there any way to wipe out all shared folders and start from the beginning?
It is a new installation.
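The Delete button is typically grayed out because some service (an SMB share, rsync module, etc.) still references the shared folder. One way to inspect what is left, run on the NAS itself (a sketch; `omv-confdbadm` ships with OMV 5, and this is read-only inspection):

```shell
# List every shared-folder object in OMV's config database (look for the
# stuck one and note its uuid):
omv-confdbadm read --prettify conf.system.sharedfolder

# Count remaining references to shared folders in the raw config; services
# point at shared folders via <sharedfolderref> entries:
grep -c 'sharedfolderref' /etc/openmediavault/config.xml
```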