If the Web GUI shows a notification saying a reboot is required, and that reboot is done via an SSH session rather than through the GUI, the notification remains until a reboot is triggered from the Web GUI itself.
Posts by RobLatour
-
Perhaps a silly question, but is there any difference between scheduling a reboot via System - Scheduled Tasks vs System - Power Management - Scheduled Tasks?
I had scheduled it under System - Power Management - Scheduled Tasks, but noticed it didn't appear when I looked under System - Scheduled Tasks.
Also, if I do schedule it under System - Scheduled Tasks, for the command do I just enter "reboot" (without the quotes), or do I need to enter "sudo reboot" (without the quotes)?
-
If you log on to the OMV web interface as admin, the dashboard should show the uptime. If you don't see it, click on the user icon (near the top right), choose Dashboard, and check the uptime option.
-
That's too bad, but thanks for confirming.
-
If System - Monitoring is enabled, is the data associated with that monitoring read/written to disk - or is it only maintained in memory (RAM)?
-
+1 on:
sudo mdadm --readwrite /dev/md0
for me too
-
Quote
There is no simpler method of backing up OMV's OS than cloning thumbdrives or SD-cards.
So my NAS is in the basement. To back up the OS, I:
ssh into it
run backupRoutine/createBackup.sh
(minimize the window - an hour later open it up again)
exit
Job done.
... or of course, just do the backup in a cron job.
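For example, a root crontab entry along these lines would do it weekly (a sketch only - the path assumes the script lives at /home/pi/backupRoutine/createBackup.sh; adjust to wherever yours actually is):
Code
# run the backup every Sunday at 3am (add via: sudo crontab -e)
0 3 * * 0 /home/pi/backupRoutine/createBackup.sh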
-
I should also mention, here are my iperf3 results over my 2.5 GbE home network (with IP addresses redacted):
Code
-----------------------------------------------------------
Server listening on 5201 (test #1)
-----------------------------------------------------------
Accepted connection from #.#.#.#, port 53648
[  5] local 192.168.1.10 port 5201 connected to #.#.#.# port 53658
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   266 MBytes  2.23 Gbits/sec
[  5]   1.00-2.01   sec   276 MBytes  2.29 Gbits/sec
[  5]   2.01-3.01   sec   278 MBytes  2.33 Gbits/sec
[  5]   3.01-4.01   sec   277 MBytes  2.33 Gbits/sec
[  5]   4.01-5.00   sec   275 MBytes  2.33 Gbits/sec
[  5]   5.00-6.01   sec   282 MBytes  2.33 Gbits/sec
[  5]   6.01-7.01   sec   278 MBytes  2.33 Gbits/sec
[  5]   7.01-8.01   sec   277 MBytes  2.33 Gbits/sec
[  5]   8.01-9.01   sec   278 MBytes  2.33 Gbits/sec
[  5]   9.01-10.00  sec   275 MBytes  2.33 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  2.70 GBytes  2.32 Gbits/sec                  receiver
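For anyone wanting to reproduce the test, the output above is from a stock iperf3 server/client pair, along these lines (the server address is a placeholder):
Code
# on the NAS
iperf3 -s
# on another machine on the same network
iperf3 -c 192.168.1.10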
-
Here are all the components I used in creating my DIY NAS running OMV, as inspired by Don of Novaspirit's YouTube video found here.
Of note, none of the links below are affiliate links; they are just posted for reference.
Inside the case:
- Raspberry Pi 5 8GB
- Raspberry Pi 5 Active Cooler
- Raspberry Pi M.2 HAT
- M.2 PCIe NVMe SATA6G 2-port SATA III 6Gb/s M.2 22x42 M-key and B-key controller adapter card
Outside the case:
- For the HDDs:
- 2 x 6TB HDDs (paired as a RAID 1 array)
- 2 x SATA III cables
- 1 x Power cable for 2 SATA drives with a 5.5x2.5mm female jack
- 1 x 12V 10A power supply with a 5.5x2.5mm male jack
- For the SSD
- 1 x 2TB SSD in an enclosure with a USB C connection
- 1 x USB A to USB C data cable
- For Ethernet
- For the Pi
- For the Operating System and NAS Software
For the case, I remixed a 3D-printable model for it and published it here
Beyond that, I have a poor man's HDD cage (here) on order and am looking forward to finishing it off.
-
I recently set up OMV 7.
When I did that, I set up two 6TB HDDs as mirrored.
That worked fine at the time.
However, in the last couple of days I started getting emails that said "A DegradedArray event had been detected on md device /dev/md0."
In Storage - Disks - I see all my disks listed as expected - including the two 6TB disks /dev/sdb and /dev/sdd
In Storage - Smart - Devices - I see all my disks listed as expected (including the two 6TB disks); the two 6 TB disks show up as 'Good'
In Storage - Multiple Devices - I see:
/dev/md0 clean, degraded Mirror 5.46 TiB /dev/sdb
(there is no reference to /dev/sdd)
This is what Show details shows:
Code
Version : 1.2
Creation Time : Mon Jul 22 16:59:28 2024
Raid Level : raid1
Array Size : 5860390464 (5.46 TiB 6.00 TB)
Used Dev Size : 5860390464 (5.46 TiB 6.00 TB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Mon Jul 29 13:04:13 2024
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Consistency Policy : bitmap
Name : omv:0 (local to host omv)
UUID : 87029988:ddbef714:78333d1a:23ddae4c
Events : 10543
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
- 0 0 1 removed
I did some reading, and I thought I would just remove the mirrored disk from Storage - Multiple Devices, wipe /dev/sdd, and recreate the mirror.
However, when I click on the mirror in Storage - Multiple Devices, both the remove and delete icons remain greyed out.
I did some more reading and found a post that recommended accessing the command line via SSH as root and issuing the commands
mdadm --stop /dev/md0
and
mdadm --add /dev/md0 /dev/sdd
I tried that, but the mdadm --stop /dev/md0 command gave a message that said:
"mdadm: Cannot get exclusive access to /dev/md0:Perhaps a running process, mounted filesystem or active volume group?"
Any suggestions?
Edit:
I also installed the resetperms plugin.
It said there was an SMB share.
I deleted the share from SMB - Shares and Storage - Shared Folders and applied the changes.
resetperms no longer shows any shares for the drive.
I rebooted.
I still cannot remove md0 from the GUI or from the command prompt as root.
Edit:
After attempting to clean the drive that wasn't working, and having that fail too, I took it out of my setup and formatted it separately using GParted on another machine. I then put it back into my OMV setup and it was still not responsive. As a Hail Mary, I powered everything down, swapped out the (from China) SATA cable with another one (from an old ASUS motherboard), powered everything back up, and guess what - it worked! I could clean the drive from OMV, and then I tried a recovery from the Multiple Devices plugin and it started to recover the mirror volume - currently at 1%, and I suspect it will take a while.
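For anyone landing here with the same problem: the rebuild progress can also be watched from an SSH session via the kernel's md status file, e.g.:
Code
# one-off look at array state and rebuild progress
cat /proc/mdstat
# or refresh every 2 seconds until it finishes
watch -n 2 cat /proc/mdstat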
If anyone spent time thinking about this post, my apologies.
-
ryecoaaron Understood.
However, over the years I've learned that if something can happen, but shouldn't, it will.
Having said that, if someday you do decide to make changes to the plugin to put the drive into read-only mode and block the Web Interface - to save you going down the same rabbit hole - I've found that stopping or disabling nginx doesn't work, as OMV just restarts the service. I also tried masking the nginx service so that OMV doesn't restart it. While this works, even if I do it after the drive is put into ro mode, when the backup is run and the image is later restored, for some unknown (to me) reason everything works fine except that nginx comes up masked. It's an easy one-command fix to unmask it after the system has been restored, but it complicates the restore procedure. Accordingly, my shell script blocks access to the Web Interface at the firewall during the backup rather than stopping/disabling/masking the service, as the firewall changes don't persist through the backup and restore process.
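For anyone curious, the firewall approach boils down to something like this (a minimal sketch assuming iptables and the Web Interface on port 80; adjust for 443 or a custom port):
Code
# block the OMV Web Interface for the duration of the backup
iptables -I INPUT -p tcp --dport 80 -j DROP
# ... run the backup ...
# remove the rule afterwards
iptables -D INPUT -p tcp --dport 80 -j DROP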
-
It's good to know, but it really isn't documented in a way that makes any of that clear.
If you install the plugin there is only one line in it, in very small print, that reads:
dd full disk - use dd to clone the entire drive to a compressed image file
However, fair enough.
Beyond that, in the OMV documentation, if you search for backup you only find two relevant entries:
One here: https://docs.openmediavault.org/en/stable/faq.html which says:
Quote
Can I backup or restore an existing openmediavault configuration?
There is no regular backup/restore procedure, but yes, in some way: keep the file
/etc/openmediavault/config.xml
for reference purposes if the option is to go for a clean re-install.
and one here, that says:
Quote
- USB Backup: Backup internal data to external disks on scheduled basis or on plug drive event.
For the restore, the only thing I can find (via a Google search) is the post I mentioned above, and if it is really as straightforward as what I've documented for my shell script's restore, then why not just document that?
Also, thanks for the link to the script; I looked through it and you are right, it doesn't put the OS in read-only mode during the backup. However, it also doesn't disable the OMV Web Interface. The reason I do both in my shell script is to avoid corruption (from the OS drive otherwise being written to while it is being backed up, for example by unattended updates) and confusion (if someone happens to be changing things via the Web Interface at the time of the backup).
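(For reference, the usual way to flip a mounted filesystem to read-only on a running system - assuming nothing still holds it open for writing - is a remount:)
Code
# put the root filesystem into read-only mode for the backup
mount -o remount,ro /
# ... back up ...
# return to normal read-write operation
mount -o remount,rw /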
Having said that, the plugin does offer more options in terms of backup methods, and with that comes additional complexity. Additionally, the plugin can auto-delete old backups, which is a plus as well.
In any case, thanks for your comments; they were quite helpful.
-
ryecoaaron - thank you.
Yes, I did see that plugin in the list of plugins before I wrote the script, but couldn't find any supporting documentation on how it worked or what options there were. Also, at that time, I found this [How-To] Restore OMV system backup made with openmediavault-backup plugin which, to me, looked a bit scary and was what I meant by "while I did find various backup solutions I didn't find one that I considered easy to use in terms of both backup and restore".
Having said that, I feel the solution I'm sharing has a much simpler restore process than what is described in the link above, and am hoping it will be of use to others.
-
First, let me say I am new to OMV and have been quite impressed with it so far.
Having said that, while I did find various backup solutions, I didn't find one that I considered easy to use in terms of both backup and restore - especially for how I have my system set up: basically an 'entry level' OMV setup running on a Raspberry Pi loaded from a 32 GB USB drive and managing an SSD and a couple of HDDs. That's not to say that there isn't an existing backup and restore solution for what I needed, just that if there is, I didn't find it.
In any case, I put in a little time and came up with something that I hope will be of use to others - especially those running off of a 32 or 64 GB USB drive.
Basically, it's a shell script that backs up the OS + OMV USB drive to an image file and optionally compresses it to save a significant amount of space. From there the image file, or the compressed image file, can easily be flashed to a replacement USB drive using, for example, the Raspberry Pi Imager tool.
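At its core it's just a dd image piped through gzip - a minimal sketch of the idea (the device name and output path here are placeholders; the actual script adds checks, naming, and cleanup):
Code
# image the OMV boot drive (assumed here to be /dev/sda) to a compressed file
dd if=/dev/sda bs=4M status=progress | gzip > /mnt/backups/omv-usb.img.gz
The Raspberry Pi Imager can flash the resulting .img.gz directly, without decompressing it first.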
If you're interested, I've posted the solution on Github as open source under the MIT license.
Also posted on GitHub is a write-up describing how the program works, testing and performance results, prerequisites, setup instructions, how to manually and automatically run the backup, what to do when the backup is finished, and how to restore from the backup.
I estimate the setup process should take about 5 minutes.
If you're interested, the project can be found here:
GitHub - roblatour/OMVBackup: Linux shell to backup Open Media Vault
Comments are welcome.
-
Would appreciate some help with setup instructions for OMV with bcache.
Background
Inspired by this video: https://www.youtube.com/watch?v=dI68NoGYl_I&t=1s
Here is the Hardware I have / am waiting for / plan to get:
Raspberry Pi 5 with the Raspberry Pi M.2 hat
An M.2 PCIe NVMe SATA6G 2-port SATA III 6Gb/s M.2 controller adapter card (i.e. it plugs into the Pi M.2 hat and provides two SATA ports)
Power cables for 2 SATA drives
2x 16 TB HDDs
1x 2 TB USB connected SSD
1x 2.5 GbE NIC that plugs into a USB port (ASUS USB-C2500)
Power supplies for the Pi and SATA drives
The idea is to connect the HDDs via the M.2 hat, and the 2 TB SSD and 2.5 GbE NIC via the USB 3.0 ports on the Pi.
OMV will load from a USB drive also plugged into the Pi.
I have one HDD and the SSD but will not get the second HDD until a little later so these are not configured yet.
Plus I am also still waiting for the M.2 Adapter card.
Goal
The desire is to have the two HDDs configured as RAID 1, but to also use the SSD as a cache so I can capitalize on the 2.5 GbE speeds of my home network.
Work to date
I've got OMV and the 2.5 GbE NIC set up on the Pi and they appear to be working fine (running Raspberry Pi OS).
Also, after researching it, it appears I can use bcache as a way to leverage the SSD as a cache in front of the HDDs.
What I'm looking for are the setup instructions for bcache (sudo commands etc.), and I am hoping someone would be willing to share those.
(Yes, I've googled for them but haven't found all that I need.)
Also, I asked ChatGPT for the setup instructions:
Quote
I want to set up an OMV NAS. I have two HDDs and one SSD. I will be pairing the HDDs using RAID 1. Also, I want to use bcache with the SSD, with the goal of it caching data that is read/written from the HDDs. I also want to ensure write-back is used, and that the eviction policy keeps as little, or more ideally no, data on the SSD after it is written to the HDDs. That is to say, the SSD should be used as a cache when writing to the HDDs, but all data should be written fully to the HDDs as soon as possible. For reading, I would like bcache to keep whatever data it feels would be beneficial in the cache for as long as it determines is appropriate. Please provide the bcache setup instructions for this.
Also, do I designate the two HDDs as tied together for RAID 1 in OMV before or after I issue the setup commands for bcache?
This is what I got back:
Quote
# Install bcache tools
sudo apt-get update
sudo apt-get install bcache-tools
# Prepare SSD and HDDs
sudo make-bcache -C /dev/sda
sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
sudo make-bcache -B /dev/md0
# Register bcache devices
sudo echo /dev/sda > /sys/fs/bcache/register
sudo echo /dev/md0 > /sys/fs/bcache/register
# Attach cache to backing device
sudo echo /dev/bcache0 > /sys/block/bcache1/bcache/attach
# Configure write-back mode and eviction policy
sudo echo writeback > /sys/block/bcache1/bcache/cache_mode
sudo echo 100 > /sys/block/bcache1/bcache/writeback_percent
sudo echo 0 > /sys/block/bcache1/bcache/cache_read_percent
# Format and mount bcache device
sudo mkfs.ext4 /dev/bcache1
sudo mkdir /mnt/nas
sudo mount /dev/bcache1 /mnt/nas
# Configure OMV to use bcache device
# Add /dev/bcache1 to /etc/fstab
sudo nano /etc/fstab
# Add the following line
/dev/bcache1 /mnt/nas ext4 defaults 0 0
I know ChatGPT sometimes hallucinates (i.e. makes stuff up), and I called it on the use of the command sudo echo 0 > /sys/block/bcache1/bcache/cache_read_percent as I could find no documentation on it. ChatGPT then apologized and gave me the same commands back without the reference to sudo echo 0 > /sys/block/bcache1/bcache/cache_read_percent.
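From my own digging since, two more things in the quoted commands look off to me, going by the kernel's bcache documentation: sudo echo foo > file doesn't work as intended (the redirection is done by the unprivileged shell before sudo runs, so the usual workaround is tee), and the attach step takes the cache set's UUID rather than a device path. A corrected sketch of the same sequence - device names are placeholders, and I'd still appreciate a human sanity check - might look like this:
Code
# create the cache device on the SSD and the backing device on the RAID 1 array
sudo make-bcache -C /dev/sda
sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
sudo make-bcache -B /dev/md0
# register both (tee so the write into /sys happens as root)
echo /dev/sda | sudo tee /sys/fs/bcache/register
echo /dev/md0 | sudo tee /sys/fs/bcache/register
# attach the cache set to the backing device using its UUID
sudo bcache-super-show /dev/sda | grep cset.uuid
echo <cset-uuid-from-above> | sudo tee /sys/block/bcache0/bcache/attach
# write-back caching; writeback_percent=0 should flush dirty data to the HDDs as fast as possible
echo writeback | sudo tee /sys/block/bcache0/bcache/cache_mode
echo 0 | sudo tee /sys/block/bcache0/bcache/writeback_percent
# then create the filesystem on the bcache device
sudo mkfs.ext4 /dev/bcache0
(Which, if right, would also answer my own ordering question: the RAID 1 array is created first and bcache is layered on top of it.)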
My questions
Having said that, I would appreciate a human review of the suggested configuration above. Is it missing anything?
Also, is the line added to /etc/fstab sufficient for OMV or do I need to configure the drives somehow in the OMV GUI?
With thanks.