Posts by dantebarba

    The OMV version doesn't matter. Your GRUB install is not good. There are many reasons why that could happen. Try booting a Debian disc and repairing GRUB.
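    The usual repair from a live/rescue environment is a chroot plus grub-install, something like this sketch (device names are assumptions — check yours with lsblk first):

    ```shell
    # Sketch: reinstall GRUB from a Debian live / rescue disc.
    # Assumes the OMV root filesystem is /dev/sda1 and GRUB goes to
    # the MBR of /dev/sda -- adjust both for your actual layout.
    mount /dev/sda1 /mnt
    mount --bind /dev  /mnt/dev
    mount --bind /proc /mnt/proc
    mount --bind /sys  /mnt/sys
    chroot /mnt grub-install /dev/sda   # rewrite the boot code
    chroot /mnt update-grub             # regenerate grub.cfg
    umount /mnt/dev /mnt/proc /mnt/sys /mnt
    ```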

    You were right, the GRUB install was not ok and I had to use the Debian rescue CD. Now it's booting again! Thanks very much.

    What I don't understand is why it isn't possible to get a fully working backup using fsarchiver. What am I missing? This whole process seems too convoluted.

    Same here.

    It's impossible to restore from fsarchiver, even with the same disk size in my case. I tried everything with no luck, including removing all partitions from my boot device and dd'ing the grubpart file.

    The OMV version doesn't matter. Your GRUB install is not good. There are many reasons why that could happen. Try booting a Debian disc and repairing GRUB.

    I'm restoring from a backup where GRUB worked fine; shouldn't fsarchiver back up GRUB too? It wasn't in a separate partition. Can I repair GRUB using the OMV ISO?

    Yes I was referring to the backup plugin.

    Unfortunately I've made a mistake and everything is broken now. I was cleaning up the temp directory, saw the fsarchiver temp files there, and did a rm -rf on them. BAD IDEA, THEY WERE HARDLINKS. So basically I destroyed my OMV instance. I'm now trying to restore it from backup (I keep my backups on my RAID array) using this guide:
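    For anyone else cleaning temp directories: stat shows a file's hardlink count, so a count above 1 means that name shares its data with another path. A quick scratch-directory demonstration (nothing here touches OMV itself):

    ```shell
    # Demonstration: check the hardlink count before deleting.
    tmp=$(mktemp -d)
    echo "data" > "$tmp/original"
    ln "$tmp/original" "$tmp/templink"   # second name, same inode
    stat -c %h "$tmp/original"           # prints 2 -- shared with another path
    rm "$tmp/templink"                   # removes only that name
    stat -c %h "$tmp/original"           # prints 1 -- back to a single name
    rm -r "$tmp"
    ```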

    How to restore OMV 4.X from backup-plugin to system SSD

    I added a step in which I format the ext4 partition with mkfs.

    The issue is that after restoring using that guide I'm getting a blinking cursor. I'm using SystemRescueCd 7.01 to restore.
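    For reference, the restore sequence I'm following looks roughly like this sketch (the target device and backup file names are assumptions — the real names come from the backup plugin's output directory):

    ```shell
    # Sketch of the restore, run from SystemRescueCd.
    # /dev/sdb is the target SSD (an assumption -- check with lsblk).
    dd if=/backup/omvbackup.grubparts of=/dev/sdb   # partition table + GRUB area
    partprobe /dev/sdb                              # re-read the restored table
    mkfs.ext4 /dev/sdb1                             # the extra step I added
    fsarchiver restfs /backup/omvbackup.fsa id=0,dest=/dev/sdb1
    ```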

    Fortunately /mnt was mounted read-only, so nothing under the mount was deleted, and the RAID array was protected because it was in use and the rm command skipped it. A miracle.

    I can't see all of the arguments to tell if it's two runs doing the same thing. You are using the backup plugin to do this, I assume? How often is it scheduled? Hard to say what the problem is. You may have to kill the runs and delete the file it was creating.

    I am using the default fsarchiver plugin from OMV.

    Anyway, I may have found the issue: I didn't exclude /mnt, and I was mounting a Gsuite folder there. 😊
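    In case it helps anyone: fsarchiver supports exclude patterns, so mounted remote storage can be skipped explicitly. A sketch (device and paths are assumptions):

    ```shell
    # Sketch: archive the root filesystem while skipping anything
    # matching /mnt. /dev/sda1 and the output path are assumptions.
    fsarchiver savefs --exclude=/mnt /backup/omvbackup.fsa /dev/sda1
    ```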

    So today I got a notification that my NAS storage was at 85% used space. That's weird because I usually sit at 50% used (2 TB), so I checked with ncdu to find the culprit. Well, it was an fsarchiver backup file that was 1.2 TB when it should be 20 GB. I tried to delete the file but to my surprise it was locked, so I ran htop and found this:
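    When a file seems locked like this, you can usually find the process holding it open (the path here is an assumption):

    ```shell
    # Sketch: find which process keeps a file busy.
    lsof /srv/backup/omvbackup.fsa
    # or, if lsof is not installed:
    fuser -v /srv/backup/omvbackup.fsa
    ```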

    Running time: 7 hours.

    What's happening?

    It's healthy, yes; maybe it's just not up to the task.

    About the backup: yeah, I'll probably end up plugging in a USB drive once in a while and doing an automatic backup when the drive is detected, something like a cold backup.

    The thing is that I do have a UPS. Maybe the minimum voltage threshold is too low and it didn't kick in; I'm looking to change that now. But clearly the UPS failed to handle this surge properly.

    If someone knows how to change the LOTRANS value on an APC UPS, I'd be grateful.
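    From what I've read, on many APC models the low-transfer voltage is stored in the UPS EEPROM and can be changed with apctest, which ships with apcupsd. A sketch (menu entries vary by model and driver, so this is not a recipe):

    ```shell
    # Sketch: apctest talks to the UPS directly, so apcupsd must be
    # stopped first. The exact EEPROM menu option depends on the
    # model and the configured driver.
    systemctl stop apcupsd
    apctest
    # ...then pick the menu option to view/modify the UPS EEPROM
    # values and adjust the low transfer voltage (LOTRANS).
    systemctl start apcupsd
    ```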

    I'll probably upload at least the latest backup to Gsuite; unfortunately my 5 Mbps upload is quite poor and it takes time.

    Today we had a power surge at my house. Usually I don't care because I have a UPS that handles it. My apcupsd was configured to do a soft shutdown at 80% remaining battery. Unfortunately, this time something went wrong. The surge was strange, more like a power drop: my work computer (which is not connected to the UPS) suddenly restarted, like a burst. To my surprise I received a monitoring alert that my NAS was down, and when I checked it was shut down, which is strange because a soft shutdown shouldn't have been triggered in this case.
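    For context, that 80% threshold corresponds to the BATTERYLEVEL setting in /etc/apcupsd/apcupsd.conf; the relevant lines look roughly like this sketch (values reconstructed from the setup described above, not copied from the actual file):

    ```
    # /etc/apcupsd/apcupsd.conf (relevant lines only)
    BATTERYLEVEL 80   # start shutdown when charge drops to 80%
    MINUTES 0         # 0 disables the remaining-runtime trigger
    TIMEOUT 0         # 0 disables the fixed on-battery timeout
    ```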

    I started the NAS again and it never booted; it stays at a black screen with a blinking cursor. I removed my RAID1 disks and my SSD, which is mounted in a USB enclosure, and tried to recover the partition using my work computer, with no luck at all.

    Sadly, my OMV backups are inside the RAID1 (fsarchiver backups), so I mounted the RAID1 using SystemRescueCd so I could recover the backup and dump it to the SSD.

    I recovered the backup successfully, but then again: black screen. It didn't solve the issue at all. I'm now reinstalling OMV from scratch, but it's really puzzling; maybe my SSD has been damaged?

    I will keep the backups and probably try to restore them later, but really, has anyone had a similar experience?


    +Flash Memory plugin reinstalled
    +Replaced OMV monit with NetData image.
    +CPU average below 5% and IO wait below 10% when idling.

    Tweaking plus deleting Sonarr solved the issue. Now it's time to look into why Sonarr was causing this.

    Found them, I think. Sonarr and Radarr were constantly spiking up to 30% CPU usage even though they are not configured and have empty databases. I stopped their containers and so far so good: after 10 hours everything is working fine and crons are not missing their deadlines. Besides, I searched on GitHub and found this:
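    For anyone wanting to check this on their own box, docker has a one-shot per-container CPU/memory snapshot (container names below are assumptions):

    ```shell
    # One-shot snapshot of per-container CPU and memory usage.
    docker stats --no-stream
    # Stop the suspects (names depend on how you created them):
    docker stop sonarr radarr
    ```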

    I'll try to look for a solution. I have Sonarr and Radarr on my VPS too, but I'm not observing this behaviour there. It's probably happening anyway and I just can't detect it, since I don't have any monitoring app. I will probably install Prometheus on my VPS so I can check whether it's happening there too.

    If I don't see the issue coming back again I will reinstall the OMV-Flashmemory plugin, as it looks like it reduces the idle IO wait a lot.

    Thanks for your help, tkaiser.

    Well, I have the Flash Memory plugin installed. Once I click "Reset" inside the plugin options, the IO wait drops and returns to normal. Any idea why? Should I keep using this plugin?

    I'm currently uninstalling OMV-Flashmemory to test whether the issue goes away.

    Hello folks.

    Last time I was making a backup from my desktop PC to my NAS, and suddenly, after roughly 10 GB copied, Samba froze and locked the filesystem. I thought it was a Samba issue, so I switched to SCP (WinSCP) and the same thing happened: filesystem locked (can't ls, can't stat any file, can't do anything in the specific folder that was the target of the copy). Due to lack of time, and since I had a spare 1 TB external drive, I ended up rsyncing everything from the external drive to my NAS; this process finished successfully after copying 200 GB.

    Anyway, today I reviewed some stats on the server and found that I/O wait times are quite high. I don't know if this is common or just a bug in the monitoring software, but it obviously got my attention. I ran iotop and didn't find anything suspicious.
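    For anyone checking the same thing: besides iotop, vmstat shows the system-wide iowait percentage and iostat (from the sysstat package) shows per-device wait times:

    ```shell
    # System-wide iowait: the "wa" column (percent of CPU time
    # spent waiting on I/O), sampled every second, five samples.
    vmstat 1 5
    # Per-device utilisation and await times (needs sysstat):
    iostat -x 1 5
    ```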

    Maybe this issue is related to the filesystem locks, maybe not, but the NAS doesn't feel very stable.

    NAS specs:

    - i5-4440
    - 8 GB of RAM
    - 1 TB software RAID1
    - OMV running from a USB stick with the OMV Flash Memory plugin installed
    - Ethernet connection over Powerline