Hello, I have a problem that prevents me from using my NAS.
I created a RAID 5 with 3 disks, and when I try to mount the file system in EXT4 format I get an error 500 with the following message: "Removing the root file system has been aborted". I don't understand why, and I can't do anything.
I can't find anything on the internet about it and I'm getting frustrated.
I also don't know what the /dev/dm-0 file system is.
I've added pictures so you can see.
Error 500 while mounting file system
-
-
You are not providing enough information to identify the main problem. But you've marked a filesystem for deletion that contains the root file system (the operating system). That's a bad idea, so the system prevents you from doing it.
-
What information do you need?
-
-
I'm not trying to remove the OS; the OS is on another disk, and I just want to create a file system on my RAID. The OS is not on a disk that is used for the RAID.
-
I'm not trying to remove the OS; the OS is on another disk, and I just want to create a file system on my RAID. The OS is not on a disk that is used for the RAID.
The code tells a different story: a mount point configuration that is responsible for the root filesystem is about to be removed.
What is the output of the following commands?
Do you have the openmediavault-sharerootfs plugin installed?
-
I have added a screenshot of the output of the commands you asked me to run. I hope it helps.
Yes, I did, but I removed it, and I had the problem before installing that plugin.
-
-
Antoine3331 Have you solved your problem? Perhaps the lack of further comment means votdev is satisfied there is no code error here.
From the data you provided, you must have installed Debian first using LVM and then used the script that installs OMV, OMV-Extras and Flashmemory.
Only after installing the sharerootfs plugin would the "/dev/dm-0" device appear under Filesystems; it's your root filesystem! "dm" is short for device-mapper, so /dev/dm-0 is the kernel name for a /dev/mapper/... device.
But you say you had a problem with RAID5 before this. Were you trying to create a RAID on USB drives?
-
I don't know how it was possible to get into this state, because the UI should not allow you to delete the root file system, but I think deleting the file /var/lib/openmediavault/fstab_tasks.json should fix the issue.
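To see why deleting that file helps: OMV appears to replay the tasks queued there on every apply, so a leftover delete task for "/" keeps re-triggering the abort. A minimal Python sketch (the sample JSON is modelled on the file's actual contents as pasted later in this thread, not read from a live system):

```python
import json

# Sample content modelled on /var/lib/openmediavault/fstab_tasks.json
# as dumped later in this thread (assumption, not read from a real box):
tasks = json.loads("""
[
  {"id": "delete", "func": "deleteEntry",
   "params": {"dir": "/", "type": "ext4",
              "fsname": "/dev/disk/by-uuid/4a1aa5db-980b-43ac-a62a-a2b9eadcb34f"}}
]
""")

# Any queued deleteEntry task whose mount dir is "/" will re-trigger the
# "Removing the root file system has been aborted" error on each apply.
stale = [t for t in tasks
         if t["func"] == "deleteEntry" and t["params"]["dir"] == "/"]
print(len(stale))  # 1 -> the queue still holds a root-filesystem delete
```

Removing the file simply empties that task queue, so nothing root-related is replayed on the next apply.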
-
I set this up in a VM:
Code
root@omv-lvm:~# lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 12 (bookworm)
Release:        12
Codename:       bookworm
root@omv-lvm:~# lsblk -f
NAME                    FSTYPE            FSVER    LABEL     UUID                                   FSAVAIL FSUSE% MOUNTPOINTS
sda                     linux_raid_member 1.2      omv-lvm:0 56828c82-1be1-1fe0-3489-0d078af803f2
└─md0                   ext4              1.0                1ea38155-0577-4734-b28a-3c089a65d727     19.5G     0% /srv/dev-disk-by-uuid-1ea38155-0577-4734-b28a-3c089a65d727
sdb                     linux_raid_member 1.2      omv-lvm:0 56828c82-1be1-1fe0-3489-0d078af803f2
└─md0                   ext4              1.0                1ea38155-0577-4734-b28a-3c089a65d727     19.5G     0% /srv/dev-disk-by-uuid-1ea38155-0577-4734-b28a-3c089a65d727
sdc                     linux_raid_member 1.2      omv-lvm:0 56828c82-1be1-1fe0-3489-0d078af803f2
└─md0                   ext4              1.0                1ea38155-0577-4734-b28a-3c089a65d727     19.5G     0% /srv/dev-disk-by-uuid-1ea38155-0577-4734-b28a-3c089a65d727
sr0
vda
├─vda1                  vfat              FAT32              A928-3592                               499.3M     2% /boot/efi
├─vda2                  ext2              1.0                5cf40bc4-2bc7-4979-8f1c-7a74723b3ad4    330.8M    22% /boot
└─vda3                  LVM2_member       LVM2 001           8IL4PB-tZ3f-UEhA-dEna-5mq0-nB3U-y5Iaiq
  ├─omv--lvm--vg-root   ext4              1.0                4a1aa5db-980b-43ac-a62a-a2b9eadcb34f     14.7G    11% /var/folder2ram/var/cache/samba
  │                                                                                                               /var/folder2ram/var/lib/monit
  │                                                                                                               /var/folder2ram/var/lib/rrdcached
  │                                                                                                               /var/folder2ram/var/spool
  │                                                                                                               /var/folder2ram/var/lib/openmediavault/rrd
  │                                                                                                               /var/folder2ram/var/tmp
  │                                                                                                               /var/folder2ram/var/log
  │                                                                                                               /
  └─omv--lvm--vg-swap_1 swap              1                  5268a83c-f9c0-4515-89e9-b6584a4aacd7                 [SWAP]
root@omv-lvm:~# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Jan 16 07:27:42 2024
        Raid Level : raid5
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Tue Jan 16 10:47:53 2024
             State : clean
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : omv-lvm:0  (local to host omv-lvm)
              UUID : 56828c82:1be11fe0:34890d07:8af803f2
            Events : 22

    Number   Major   Minor   RaidDevice State
       0       8       32        0      active sync   /dev/sdc
       1       8       16        1      active sync   /dev/sdb
       2       8        0        2      active sync   /dev/sda
I can delete the md RAID as normal. If I try to unmount the /dev/dm-0 device, the system prevents this and leaves the "pending config changes" message on screen. Undoing the changes leaves this in "/var/lib/openmediavault/fstab_tasks.json":
Code
root@omv-lvm:~# cat /var/lib/openmediavault/fstab_tasks.json
[
  {
    "id": "delete",
    "func": "deleteEntry",
    "params": {
      "uuid": "79684322-3eac-11ea-a974-63a080abab18",
      "fsname": "\/dev\/disk\/by-uuid\/4a1aa5db-980b-43ac-a62a-a2b9eadcb34f",
      "dir": "\/",
      "type": "ext4",
      "opts": "errors=remount-ro",
      "freq": 0,
      "passno": 1,
      "hidden": true,
      "usagewarnthreshold": 0,
      "comment": ""
    }
  },
  {
    "id": "delete",
    "func": "deleteEntry",
    "params": {
      "uuid": "79684322-3eac-11ea-a974-63a080abab18",
      "fsname": "\/dev\/disk\/by-uuid\/4a1aa5db-980b-43ac-a62a-a2b9eadcb34f",
      "dir": "\/",
      "type": "ext4",
      "opts": "errors=remount-ro",
      "freq": 0,
      "passno": 1,
      "hidden": true,
      "usagewarnthreshold": 0,
      "comment": ""
    }
  }
]
root@omv-lvm:~#
Double entry because I hit undo twice.
votdev Is this the expected behaviour, or should /var/lib/openmediavault/fstab_tasks.json be empty?
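The double entry noted above (undo pressed twice) could be collapsed by deduplicating tasks on a canonical serialisation. An illustrative Python sketch, not OMV code:

```python
import json

# Two identical deleteEntry tasks, shaped like the fstab_tasks.json dump above
tasks = [
    {"id": "delete", "func": "deleteEntry", "params": {"dir": "/"}},
    {"id": "delete", "func": "deleteEntry", "params": {"dir": "/"}},
]

# Serialise each task with sorted keys so equal tasks map to the same string;
# a dict keyed on that string keeps only one copy of each distinct task.
unique = list({json.dumps(t, sort_keys=True): t for t in tasks}.values())
print(len(unique))  # the two duplicates collapse to 1
```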
-
-
To continue:
votdev Having deleted the md RAID, you hit a problem when you try to reinstate it in "Storage | Filesystems":
This generates this error:
Code
Removing the root file system has been aborted
OMV\Exception: Removing the root file system has been aborted in /usr/share/openmediavault/engined/module/fstab.inc:59
Stack trace:
#0 [internal function]: Engined\Module\FSTab->deleteEntry()
#1 /usr/share/php/openmediavault/engine/module/moduleabstract.inc(165): call_user_func_array()
#2 /usr/share/openmediavault/engined/module/fstab.inc(35): OMV\Engine\Module\ModuleAbstract->execTasks()
#3 /usr/share/openmediavault/engined/rpc/config.inc(175): Engined\Module\FSTab->preDeploy()
#4 [internal function]: Engined\Rpc\Config->applyChanges()
#5 /usr/share/php/openmediavault/rpc/serviceabstract.inc(122): call_user_func_array()
#6 /usr/share/php/openmediavault/rpc/serviceabstract.inc(149): OMV\Rpc\ServiceAbstract->callMethod()
#7 /usr/share/php/openmediavault/rpc/serviceabstract.inc(622): OMV\Rpc\ServiceAbstract->OMV\Rpc\{closure}()
#8 /usr/share/php/openmediavault/rpc/serviceabstract.inc(146): OMV\Rpc\ServiceAbstract->execBgProc()
#9 /usr/share/openmediavault/engined/rpc/config.inc(199): OMV\Rpc\ServiceAbstract->callMethodBg()
#10 [internal function]: Engined\Rpc\Config->applyChangesBg()
#11 /usr/share/php/openmediavault/rpc/serviceabstract.inc(122): call_user_func_array()
#12 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod()
#13 /usr/sbin/omv-engined(535): OMV\Rpc\Rpc::call()
#14 {main}
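The exception is thrown by the guard in fstab.inc:59. A Python sketch of its observable behaviour (the real code is PHP; this only mirrors what the stack trace shows, it is not OMV's implementation):

```python
# Illustrative sketch: the fstab module refuses to run a queued deleteEntry
# task whose mount dir is "/", raising instead of removing the entry.
class FsTabError(Exception):
    pass

def delete_entry(params: dict) -> None:
    if params.get("dir") == "/":
        # Mirrors the message seen in the stack trace above
        raise FsTabError("Removing the root file system has been aborted")
    # ... removal of a non-root fstab entry would happen here ...

try:
    delete_entry({"dir": "/", "type": "ext4"})
except FsTabError as e:
    print(e)  # Removing the root file system has been aborted
```

Because the stale task stays queued in fstab_tasks.json, this guard fires again on every later apply, even ones that have nothing to do with the root filesystem.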
-
-
Please open a GitHub issue and add all necessary information. Most important is HOW to reproduce the issue. I need to know exactly how to rebuild such a system.
-
-
votdev I have reproduced a system like Antoine3331's in an OMV7 virtual machine. Debian installed with root on LVM. Applied the script to install OMV etc. Added both the md and sharerootfs plugins. As the OMV OS has root on LVM (a /dev/mapper device), it appears under Filesystems as "/dev/dm-0". Created an md RAID5 as normal. All is OK.
Next action was to attempt to unmount /dev/dm-0, which the system prevents with the normal warning message. Next action was to unmount and delete the md RAID; no error was encountered. Next action was to re-create the RAID in the MD section and then mount the re-created RAID under Filesystems. It is this last action under "Storage | Filesystems" which generates the error. Removing the sharerootfs plugin is NOT a workaround.
Content of the relevant JSON file is still this:
Code
Removing the root file system has been aborted
OMV\Exception: Removing the root file system has been aborted in /usr/share/openmediavault/engined/module/fstab.inc:59
Stack trace:
#0 [internal function]: Engined\Module\FSTab->deleteEntry()
#1 /usr/share/php/openmediavault/engine/module/moduleabstract.inc(165): call_user_func_array()
#2 /usr/share/openmediavault/engined/module/fstab.inc(35): OMV\Engine\Module\ModuleAbstract->execTasks()
#3 /usr/share/openmediavault/engined/rpc/config.inc(175): Engined\Module\FSTab->preDeploy()
#4 [internal function]: Engined\Rpc\Config->applyChanges()
#5 /usr/share/php/openmediavault/rpc/serviceabstract.inc(122): call_user_func_array()
#6 /usr/share/php/openmediavault/rpc/serviceabstract.inc(149): OMV\Rpc\ServiceAbstract->callMethod()
#7 /usr/share/php/openmediavault/rpc/serviceabstract.inc(622): OMV\Rpc\ServiceAbstract->OMV\Rpc\{closure}()
#8 /usr/share/php/openmediavault/rpc/serviceabstract.inc(146): OMV\Rpc\ServiceAbstract->execBgProc()
#9 /usr/share/openmediavault/engined/rpc/config.inc(199): OMV\Rpc\ServiceAbstract->callMethodBg()
#10 [internal function]: Engined\Rpc\Config->applyChangesBg()
#11 /usr/share/php/openmediavault/rpc/serviceabstract.inc(122): call_user_func_array()
#12 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod()
#13 /usr/sbin/omv-engined(535): OMV\Rpc\Rpc::call()
#14 {main}
My question at end of #9 above remains the same.
-
Can you please post the output of omv-confdbadm read --prettify conf.system.filesystem.mountpoint?
-
Next action was to attempt to unmount /dev/dm-0, which the system prevents with the normal warning message.
It should not be possible to select the action button for the root file system; the action button should be disabled.
Please post the output of omv-rpc -u admin 'FileSystemMgmt' 'getList' '{"start":0,"limit":-1}' | jq
-
-
And please open a GitHub issue. The information needed to identify the problem is getting more and more extensive. Tracking such an issue in the forum is not a good approach; GH is better for this. Additionally, the fixed code can reference the GH issue, which is helpful for understanding the problem later.
-
votdev Info you requested:
Code
root@omv-lvm:~# omv-confdbadm read --prettify conf.system.filesystem.mountpoint
[
    {
        "comment": "",
        "dir": "/",
        "freq": 0,
        "fsname": "/dev/disk/by-uuid/4a1aa5db-980b-43ac-a62a-a2b9eadcb34f",
        "hidden": true,
        "opts": "errors=remount-ro",
        "passno": 1,
        "type": "ext4",
        "usagewarnthreshold": 0,
        "uuid": "79684322-3eac-11ea-a974-63a080abab18"
    }
]
root@omv-lvm:~# omv-rpc -u admin 'FileSystemMgmt' 'getList' '{"start":0,"limit":-1}' | jq
{
  "total": 1,
  "data": [
    {
      "devicename": "mapper/omv--lvm--vg-root",
      "devicefile": "/dev/disk/by-uuid/4a1aa5db-980b-43ac-a62a-a2b9eadcb34f",
      "predictabledevicefile": "/dev/disk/by-uuid/4a1aa5db-980b-43ac-a62a-a2b9eadcb34f",
      "canonicaldevicefile": "/dev/dm-0",
      "parentdevicefile": "/dev/dm-0",
      "devlinks": [
        "/dev/disk/by-id/dm-name-omv--lvm--vg-root",
        "/dev/disk/by-id/dm-uuid-LVM-qrB1zl0Sshr0gzosPHD4NeJC7D2aLNF50m40r7lb7304SjsnCw6V0hOOeMmByPrq",
        "/dev/disk/by-uuid/4a1aa5db-980b-43ac-a62a-a2b9eadcb34f",
        "/dev//dev/mapper/omv--lvm--vg-root",
        "/dev//dev/omv-lvm-vg/root"
      ],
      "uuid": "4a1aa5db-980b-43ac-a62a-a2b9eadcb34f",
      "label": "",
      "type": "ext4",
      "blocks": "18446160",
      "mounted": true,
      "mountpoint": "/",
      "used": "1.93 GiB",
      "available": "15823626240",
      "size": "18888867840",
      "percentage": 12,
      "description": "/dev/dm-0 [EXT4, 1.93 GiB (12%) used, 14.73 GiB available]",
      "propposixacl": true,
      "propquota": true,
      "propresize": true,
      "propfstab": true,
      "propcompress": false,
      "propautodefrag": false,
      "hasmultipledevices": false,
      "devicefiles": [
        "/dev/dm-0"
      ],
      "comment": "",
      "_readonly": false,
      "_used": false,
      "propreadonly": false,
      "usagewarnthreshold": 0,
      "mountopts": "errors=remount-ro",
      "status": 1
    }
  ]
}
root@omv-lvm:~#
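The getList output confirms that the only filesystem entry is the root one. A small Python sketch of how a client (or the UI) could spot entries mounted at "/" and disable actions on them; the trimmed sample below keeps only a few fields from the reply above:

```python
import json

# Trimmed sample of the FileSystemMgmt.getList reply shown above;
# only the fields needed for this check are kept.
reply = json.loads("""
{"total": 1, "data": [
  {"devicename": "mapper/omv--lvm--vg-root",
   "canonicaldevicefile": "/dev/dm-0",
   "mountpoint": "/", "type": "ext4", "mounted": true}
]}
""")

# Entries mounted at "/" are the root filesystem; the UI could use this
# to disable the unmount/delete action buttons for them.
root_entries = [fs for fs in reply["data"] if fs["mountpoint"] == "/"]
print(root_entries[0]["canonicaldevicefile"])  # /dev/dm-0
```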
-
-
-
votdev As requested:
Code
root@omv-lvm:~# findmnt --output-all --nofsroot --json --types=noautofs > findmnt.txt
root@omv-lvm:~# head -30 findmnt.txt
{
   "filesystems": [
      {
         "avail": "14.7G",
         "freq": null,
         "fsroot": "/",
         "fstype": "ext4",
         "fs-options": "rw,errors=remount-ro",
         "id": 26,
         "label": null,
         "maj:min": "253:0",
         "options": "rw,relatime,errors=remount-ro",
         "opt-fields": "shared:1",
         "parent": 1,
         "partlabel": null,
         "partuuid": null,
         "passno": null,
         "propagation": "shared",
         "size": "17.6G",
         "source": "/dev/mapper/omv--lvm--vg-root",
         "sources": [
            "/dev/mapper/omv--lvm--vg-root"
         ],
         "target": "/",
         "tid": 100771,
         "used": "1.9G",
         "use%": "11%",
         "uuid": "4a1aa5db-980b-43ac-a62a-a2b9eadcb34f",
         "vfs-options": "rw,relatime",
         "children": [
root@omv-lvm:~# realpath /dev/mapper/omv--lvm--vg-root
/dev/dm-0
root@omv-lvm:~#
findmnt.txt @ https://pastebin.com/DGkP2sUq around 1100 lines long.
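The findmnt JSON above ties everything together: the device mounted at "/" is /dev/mapper/omv--lvm--vg-root, whose canonical node is /dev/dm-0 (as the realpath call shows). A short Python sketch of reading that mapping out of the JSON, using a minimal slice of the output with the same field names:

```python
import json

# Minimal slice of the `findmnt --json` output above (same field names)
findmnt = json.loads("""
{"filesystems": [
  {"target": "/",
   "source": "/dev/mapper/omv--lvm--vg-root",
   "fstype": "ext4",
   "maj:min": "253:0"}
]}
""")

# Find the entry mounted at "/"; maj:min 253:0 is the first device-mapper
# node, i.e. /dev/mapper/omv--lvm--vg-root and /dev/dm-0 are one device.
root = next(fs for fs in findmnt["filesystems"] if fs["target"] == "/")
print(root["source"], root["maj:min"])  # /dev/mapper/omv--lvm--vg-root 253:0
```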
-