Clean install fixed it...
Posts by smartin
-
No. There's a change from OMV5 to OMV6 in the update management: when there are any updates, it will install all of them.
You can still select them one by one, but ONLY to view the changelog.
Maybe you haven't set the "allow root login..." option in the SSH service settings?!?
And to change the admin password, you can do it on the GUI:
Click the gear wheel and select "Change Password..."
Ah, yes, root access is enabled.
S
-
Install always installs all updates. Being able to select individual updates caused more problems than it fixed.
Do you have root disabled in the ssh tab?
You can do that in the web interface. Gear in the top right corner -> Change Password.
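For reference, the checkbox in the GUI maps to OpenSSH's PermitRootLogin directive. A sketch of the relevant fragment; on OMV the file is auto-generated by the system, so change the setting via the web interface rather than editing it by hand:

```
# /etc/ssh/sshd_config (auto-generated on OMV -- shown for reference only)
PermitRootLogin yes    # what the SSH service's root-login checkbox controls
```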
I stumbled over the admin password change. Thanks
I can't see an ssh tab. Sorry... Found it! It was enabled.
S
-
Hi,
With help from you guys I have set up an OMV5 system before and it's running fine (touch wood...) but now I'm trying to set up a V6...
I have a couple of very basic questions...
1) Updates. In V5 I could tick a box at the top of the first column to select all available updates and then "Install". Do I have to do one update at a time in V6...? There's no "Select all" checkbox...
2) I can't seem to log in over OSX terminal as root. I get constant "Permission denied. Please try again" I'm sure I have the password right. Do I have to enable ssh somewhere? I want to change the admin password for the webgui...
Thanks, as always!
S
-
I get
Code
root@openmediavault:~# ls -lah /srv/*
/srv/dev-disk-by-label-Backup:
total 44K
drwxr-xr-x  4 root root  4.0K Nov 11  2020 .
drwxr-xr-x  8 root root  4.0K Nov 11  2020 ..
-rw-------  1 root root  6.0K Jun 23 08:14 aquota.group
-rw-------  1 root root  7.0K Jun 23 08:14 aquota.user
drwxrwsrwx+ 3 root users 4.0K Nov 11  2020 Backups
drwx------  2 root root   16K Nov 11  2020 lost+found

/srv/dev-disk-by-label-OMVBackup:
total 8.0K
drwxrwxrwx 2 root root 4.0K Nov 11  2020 .
drwxr-xr-x 8 root root 4.0K Nov 11  2020 ..

/srv/dev-disk-by-label-zulu:
total 56K
drwxr-xr-x   7 root root  4.0K Aug 18  2021 .
drwxr-xr-x   8 root root  4.0K Nov 11  2020 ..
-rw-------   1 root root  6.0K Jun 23 08:14 aquota.group
-rw-------   1 root root  7.0K Jun 23 08:14 aquota.user
drwxrwsrwx   5 root users 4.0K May 17 15:27 FamShare
drwxrwsrwx   6 root users 4.0K Jun 23 09:15 LorraineTM
drwx------   2 root root   16K Jun 20  2020 lost+found
drwxrwsr-x   3 root users 4.0K Jan 12 13:20 Media
drwxrws---+ 34 root users 4.0K Jun 23 10:07 simon

/srv/ftp:
total 12K
drwxr-xr-x 2 ftp  nogroup 4.0K Oct 23  2020 .
drwxr-xr-x 8 root root    4.0K Nov 11  2020 ..
-rw-r--r-- 1 root root      59 Oct 23  2020 welcome.msg

/srv/pillar:
total 16K
drwxr-xr-x 3 root root 4.0K Apr 28 07:43 .
drwxr-xr-x 8 root root 4.0K Nov 11  2020 ..
drwxr-xr-x 2 root root 4.0K Apr 28 07:43 omv
-rw-r--r-- 1 root root  866 Jan 20 18:42 top.sls

/srv/salt:
total 28K
drwxr-xr-x 6 root root 4.0K Apr 28 07:43 .
drwxr-xr-x 8 root root 4.0K Nov 11  2020 ..
drwxr-xr-x 2 root root 4.0K Apr 28 07:43 _modules
drwxr-xr-x 8 root root 4.0K Jun 19  2020 omv
drwxr-xr-x 2 root root 4.0K Apr 28 07:43 _runners
drwxr-xr-x 2 root root 4.0K Apr 28 07:43 _states
-rw-r--r-- 1 root root  866 Jan 20 18:42 top.sls
root@openmediavault:~#
How do you suggest I do the backups? I don't need versioning as such, just reliable copies. It's 99% image files, btw...
S
-
How do you do your backups?
Does it pick up all of your RAID array?
Are you using compression?
Zoki,
I'm glad you asked... Would be good to have someone sensible have a look...
See attachments. I'm using Rsync to back my whole file system up to a single external drive. Hopefully I have two incremental update jobs and then one job which clones the file system at a longer interval.
Hope it makes sense. I don't want to trigger the scripts if there's something wrong with my main file system...
S
-
Hmmm... Did a general Google search and I'm guessing the red filesystem is to say that my disks are filling up...
Would love to know why the backup isn't the same size as the main system though...
S
-
Hi,
(I'm running OMV 5.6.26-1.)
I had a bit of a fright this morning...
I successfully created a folder on my SMB share on my RAID 5 setup.
I then saved a file to this folder but couldn't see it in the Finder on my Mac.
Logged in to OMV and noticed several things: The main file system is marked in red and my Backup is way smaller than the main file system, which can't be right. I also got a big error message on login. See attachments.
The big error message (and some minor ones) persisted after a reboot of the OMV system. I then shut down the OMV box entirely and rebooted my Mac. When everything came back up again, the various error messages had gone, but the file system is still marked in red and the backup size doesn't match.
I reset ACL permissions and the contents of my folder re-appeared.
The persisting issues are the file system marked in red and the size mismatch.
Does anyone know what's up with my setup...? Is it just that the file system is nearly full...? Why the size mismatch relative to the backup...?
This is the top section of my log. Full log attached:
Code
Jun 23 00:00:00 openmediavault rsyslogd: [origin software="rsyslogd" swVersion="8.1901.0" x-pid="2956" x-info="https://www.rsyslog.com"] rsyslogd was HUPed
Jun 23 00:00:00 openmediavault systemd[1]: logrotate.service: Succeeded.
Jun 23 00:00:00 openmediavault systemd[1]: Started Rotate log files.
Jun 23 00:00:01 openmediavault CRON[25479]: (root) CMD (/usr/sbin/omv-mkrrdgraph >/dev/null 2>&1)
Jun 23 00:09:00 openmediavault systemd[1]: Starting Clean php session files...
Jun 23 00:09:00 openmediavault systemd[1]: phpsessionclean.service: Succeeded.
Jun 23 00:09:00 openmediavault systemd[1]: Started Clean php session files.
Jun 23 00:09:01 openmediavault CRON[25749]: (root) CMD ( [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]; then /usr/lib/php/sessionclean; fi)
Jun 23 00:15:01 openmediavault CRON[25814]: (root) CMD (/usr/sbin/omv-mkrrdgraph >/dev/null 2>&1)
Jun 23 00:17:01 openmediavault CRON[25959]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jun 23 00:30:01 openmediavault CRON[26097]: (root) CMD (/usr/sbin/omv-mkrrdgraph >/dev/null 2>&1)
Jun 23 00:39:00 openmediavault systemd[1]: Starting Clean php session files...
Jun 23 00:39:00 openmediavault systemd[1]: phpsessionclean.service: Succeeded.
Jun 23 00:39:00 openmediavault systemd[1]: Started Clean php session files.
Jun 23 00:39:01 openmediavault CRON[26375]: (root) CMD ( [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]; then /usr/lib/php/sessionclean; fi)
Jun 23 00:45:01 openmediavault CRON[26439]: (root) CMD (/usr/sbin/omv-mkrrdgraph >/dev/null 2>&1)
Jun 23 01:00:01 openmediavault CRON[26720]: (root) CMD (/usr/sbin/omv-mkrrdgraph >/dev/null 2>&1)
Jun 23 01:09:00 openmediavault systemd[1]: Starting Clean php session files...
Jun 23 01:09:01 openmediavault systemd[1]: phpsessionclean.service: Succeeded.
Jun 23 01:09:01 openmediavault systemd[1]: Started Clean php session files.
Jun 23 01:09:01 openmediavault CRON[26998]: (root) CMD ( [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]; then /usr/lib/php/sessionclean; fi)
Jun 23 01:15:01 openmediavault CRON[27062]: (root) CMD (/usr/sbin/omv-mkrrdgraph >/dev/null 2>&1)
Jun 23 01:17:01 openmediavault CRON[27207]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jun 23 01:30:01 openmediavault CRON[27345]: (root) CMD (/usr/sbin/omv-mkrrdgraph >/dev/null 2>&1)
Jun 23 01:39:01 openmediavault CRON[27564]: (root) CMD ( [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]; then /usr/lib/php/sessionclean; fi)
Jun 23 01:39:01 openmediavault systemd[1]: Starting Clean php session files...
Jun 23 01:39:01 openmediavault systemd[1]: phpsessionclean.service: Succeeded.
Jun 23 01:39:01 openmediavault systemd[1]: Started Clean php session files.
Jun 23 01:45:01 openmediavault CRON[27687]: (root) CMD (/usr/sbin/omv-mkrrdgraph >/dev/null 2>&1)
Jun 23 02:00:01 openmediavault CRON[27963]: (root) CMD (/usr/sbin/omv-mkrrdgraph >/dev/null 2>&1)
Jun 23 02:09:01 openmediavault CRON[28181]: (root) CMD ( [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]; then /usr/lib/php/sessionclean; fi)
Jun 23 02:09:01 openmediavault systemd[1]: Starting Clean php session files...
Jun 23 02:09:01 openmediavault systemd[1]: phpsessionclean.service: Succeeded.
Jun 23 02:09:01 openmediavault systemd[1]: Started Clean php session files.
Jun 23 02:15:01 openmediavault CRON[28304]: (root) CMD (/usr/sbin/omv-mkrrdgraph >/dev/null 2>&1)
Jun 23 02:17:01 openmediavault CRON[28448]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jun 23 02:30:01 openmediavault CRON[28588]: (root) CMD (/usr/sbin/omv-mkrrdgraph >/dev/null 2>&1)
Jun 23 02:39:01 openmediavault CRON[28806]: (root) CMD ( [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]; then /usr/lib/php/sessionclean; fi)
Jun 23 02:39:01 openmediavault systemd[1]: Starting Clean php session files...
Jun 23 02:39:01 openmediavault systemd[1]: phpsessionclean.service: Succeeded.
Jun 23 02:39:01 openmediavault systemd[1]: Started Clean php session files.
Jun 23 02:45:01 openmediavault CRON[28929]: (root) CMD (/usr/sbin/omv-mkrrdgraph >/dev/null 2>&1)
Jun 23 03:00:01 openmediavault CRON[29210]: (root) CMD (/usr/sbin/omv-mkrrdgraph >/dev/null 2>&1)
Jun 23 03:09:01 openmediavault systemd[1]: Starting Clean php session files...
Jun 23 03:09:01 openmediavault CRON[29442]: (root) CMD ( [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]; then /usr/lib/php/sessionclean; fi)
Jun 23 03:09:02 openmediavault systemd[1]: phpsessionclean.service: Succeeded.
Jun 23 03:09:02 openmediavault systemd[1]: Started Clean php session files.
Jun 23 03:10:01 openmediavault CRON[29501]: (root) CMD (test -e /run/systemd/system || SERVICE_MODE=1 /sbin/e2scrub_all -A -r)
Jun 23 03:15:01 openmediavault CRON[29555]: (root) CMD (/usr/sbin/omv-mkrrdgraph >/dev/null 2>&1)
Jun 23 03:17:01 openmediavault CRON[29700]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jun 23 03:30:01 openmediavault CRON[29839]: (root) CMD (/usr/sbin/omv-mkrrdgraph >/dev/null 2>&1)
Jun 23 03:39:01 openmediavault CRON[30058]: (root) CMD ( [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]; then /usr/lib/php/sessionclean; fi)
Jun 23 03:39:01 openmediavault systemd[1]: Starting Clean php session files...
Jun 23 03:39:01 openmediavault systemd[1]: phpsessionclean.service: Succeeded.
Jun 23 03:39:01 openmediavault systemd[1]: Started Clean php session files.
Jun 23 03:45:01 openmediavault CRON[30176]: (root) CMD (/usr/sbin/omv-mkrrdgraph >/dev/null 2>&1)
Jun 23 04:00:01 openmediavault CRON[30456]: (root) CMD (/usr/sbin/omv-mkrrdgraph >/dev/null 2>&1)
Jun 23 04:09:01 openmediavault CRON[30675]: (root) CMD ( [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]; then /usr/lib/php/sessionclean; fi)
-
Yep, wipe first
It seems to be "Recovering"...
Thank you so much! One day I'll be able to do this seamlessly...
S
-
The drive that you need to replace, I take it it's not there in Raid Management -> Devices?
I just shut down thinking I'd risk replacing the disk...
I booted up again and there are only three disks in Raid Management -> Devices now.
The faulty disk was sdb, but due to the restart a different disk is now called sdb...
Plough on and replace the disk?
S
-
The RAID is now saying "Clean, degraded"
Maybe that's to be expected and I just shut down and carry on...?
-
I also got the email:
This is an automatically generated mail message from mdadm
running on openmediavault
A Fail event had been detected on md device /dev/md0.
It could be related to component device /dev/sdb.
Faithfully yours, etc.
P.S. The /proc/mdstat file currently contains the following:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdb[1](F) sda[4] sdd[5] sdc[2]
5860147200 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [U_UU]
bitmap: 0/15 pages [0KB], 65536KB chunk
unused devices: <none>
-
That looks pretty good
geaves,
I'm getting a (to me) cryptic error message... See attachments.
I get this when I tick the box next to the faulty drive and hit OK...
S
-
Use Remove, then from the dialog select the drive you want to replace and click OK; the drive will be failed and then removed from the array.
geaves,
Appreciate your help...
Just so I don't mess anything up unnecessarily...:
Use Remove.
Select the faulty drive and click OK, which will remove the drive from the array.
Shut down OMV.
Replace the disk.
Boot back up.
Go to "Disks", find the new disk and "Wipe" it.
Go to Raid Management and rebuild.
Is that the procedure?
S
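For reference, the same steps can be done from the command line. A hedged sketch only, assuming the array is /dev/md0 and the failing member is /dev/sdb as in the outputs above; these commands are destructive, so double-check device names before running anything:

```shell
# Mark the failing member faulty, then pull it out of the array
mdadm --manage /dev/md0 --fail /dev/sdb
mdadm --manage /dev/md0 --remove /dev/sdb

# Shut down, swap the physical disk, boot, wipe the new disk, then add it.
# Note the new disk may come up under a different letter after the reboot.
mdadm --manage /dev/md0 --add /dev/sdb

# Watch the rebuild progress
cat /proc/mdstat
```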
-
Hi,
I'm using OMV 5.6.25-1.
Code
root@openmediavault:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdb[1] sda[4] sdd[5] sdc[2]
      5860147200 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/15 pages [0KB], 65536KB chunk

unused devices: <none>
root@openmediavault:~# blkid
/dev/sda: UUID="e722afd9-5803-5460-ce00-e63017883000" UUID_SUB="acaec5ef-d304-a129-3754-9605267dcbdf" LABEL="openmediavault.local:zulu" TYPE="linux_raid_member"
/dev/sde1: UUID="173d1141-65e9-4ee1-ae31-b73d34f7b2cf" TYPE="ext4" PARTUUID="9d8e1096-01"
/dev/sde5: UUID="6947d5ca-f259-4fe7-be54-5b945620213c" TYPE="swap" PARTUUID="9d8e1096-05"
/dev/sdd: UUID="e722afd9-5803-5460-ce00-e63017883000" UUID_SUB="5eca6dcc-136e-506b-d09c-13fde444b570" LABEL="openmediavault.local:zulu" TYPE="linux_raid_member"
/dev/sdc: UUID="e722afd9-5803-5460-ce00-e63017883000" UUID_SUB="d49ed9a5-6400-f405-ea4d-0601f2e60642" LABEL="openmediavault.local:zulu" TYPE="linux_raid_member"
/dev/sdb: UUID="e722afd9-5803-5460-ce00-e63017883000" UUID_SUB="5667c0d5-4cec-a644-36a3-e641ec176a46" LABEL="openmediavault.local:zulu" TYPE="linux_raid_member"
/dev/sdf1: LABEL="Backup" UUID="83ed8d9d-e2f7-4e64-bfc8-8fe26f404112" TYPE="ext4" PARTUUID="4f80a638-fd4a-44e9-851a-3c9575507f12"
/dev/md0: LABEL="zulu" UUID="4c0e7ba7-40f6-47cc-86e6-75faeccc7212" TYPE="ext4"
root@openmediavault:~# fdisk -l | grep "Disk "
Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: TOSHIBA HDWD120
Disk /dev/sde: 111.8 GiB, 120034123776 bytes, 234441648 sectors
Disk model: CT120BX500SSD1
Disk identifier: 0x9d8e1096
Disk /dev/sdd: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: TOSHIBA HDWD120
Disk /dev/sdc: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: SAMSUNG HD204UI
Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: Hitachi HDS72302
Disk /dev/sdf: 5.5 TiB, 6001175125504 bytes, 11721045167 sectors
Disk model: Expansion Desk
Disk identifier: B27D326A-00A1-47B0-B2D5-732A08CA3BEA
Disk /dev/md0: 5.5 TiB, 6000790732800 bytes, 11720294400 sectors
root@openmediavault:~# cat /etc/mdadm/mdadm.conf
# This file is auto-generated by openmediavault (https://www.openmediavault.org)
# WARNING: Do not edit this file, your changes will get lost.

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR omvnotifications@xx.co.uk
MAILFROM root

# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 name=openmediavault.local:zulu UUID=e722afd9:58035460:ce00e630:17883000
root@openmediavault:~# mdadm --detail --scan --verbose
ARRAY /dev/md0 level=raid5 num-devices=4 metadata=1.2 name=openmediavault.local:zulu UUID=e722afd9:58035460:ce00e630:17883000
   devices=/dev/sda,/dev/sdb,/dev/sdc,/dev/sdd
root@openmediavault:~#
One of my RAID 5 disks is showing "Pre-fail" in the SMART Test and I'd like to replace it.
I have asked about this before and in my notes I have: "If the drive was showing errors and needed replacing, then Raid Management -> select the raid then click delete on the menu, select the drive click OK and the drive is removed, then proceed to add the new drive. All this can be done via the WebUI and no command line necessary."
My problem is that "Delete" button isn't available. See attachment.
What am I doing wrong? How do I replace the disk?
I assume there's a difference between "Remove" and "Delete"...?
Thanks!
S
-
As no answer was posted yet, I'd say it's unlikely.
An internet search might help.
I did search for that error but couldn't find anything meaningful...
-
Thanks gderf
-
Hi,
I'm using OMV 5.6.23-1.
The notification system is sending me *dozens and dozens* of these emails:
Code
This is the mail system at host openmediavault.local.

I'm sorry to have to inform you that your message could not be delivered to one or more recipients. It's attached below.

For further assistance, please send mail to postmaster.

If you do so, please include this problem report. You can delete your own text from the attached returned message.

The mail system

<omv@mydomain.com>: delivery temporarily suspended: SASL authentication failed; server relay.plus.net[212.159.8.107] said: 535 Too many SMTP auth failures. Please try again later.

Reporting-MTA: dns; openmediavault.local
X-Postfix-Queue-ID: 4EFA240F19
X-Postfix-Sender: rfc822; omv@mydomain.com
Arrival-Date: Sun, 9 Jan 2022 11:18:08 +0000 (GMT)

Final-Recipient: rfc822; omv@mydomain.com
Original-Recipient: rfc822;omv@mydomain.com
Action: failed
Status: 4.0.0
Diagnostic-Code: X-Postfix; delivery temporarily suspended: SASL authentication failed; server relay.plus.net[212.159.8.107] said: 535 Too many SMTP auth failures. Please try again later.
I'm guessing this is because I had to change my outgoing server from relay.plus.net to a new server a while ago and forgot to update the settings in OMV.
Are there messages stuck in the mail queue which need flushing? How do I do this please?
SM
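For anyone landing here later: after correcting the SMTP relay settings, the stuck bounces can be inspected and cleared with Postfix's own queue tools. A sketch, to be run as root on the OMV box:

```shell
# List what is currently sitting in the Postfix mail queue
postqueue -p

# After fixing the relay settings, ask Postfix to retry delivery of everything
postqueue -f

# Or, if the queued messages are all stale bounces, discard the whole queue
postsuper -d ALL
```

Whether to flush or discard depends on whether anything in the queue is still worth delivering; `postqueue -p` first, then decide.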
-
You have to run --assemble from the CLI; you've already blown your brownie points by rebooting.
It can do if you run cat /proc/mdstat once the rebuild has started, but it should display in the GUI anyway
Where's the 'dodgy' disk? You don't know yet if the array will rebuild, so let's do one step at a time. But to answer your question: you can't slap in a new disk because the array is currently inactive.
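The assemble step, as a rough sketch; member device names are taken from the earlier output on this system, so verify them with blkid first and treat --force with care:

```shell
# Stop the half-assembled array if it is present, then reassemble it
# from its member disks (--force lets mdadm start a degraded array).
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Confirm the array state and watch any rebuild
cat /proc/mdstat
```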
My hands are a bit shaky and clammy but things seem to be rebuilding happily now.
Thanks again
S