Posts by draggaj

    Disks have been wiped, reinstalled and re-added, and filesystems re-created. The older missing filesystems have been unmounted, which removes them from the GUI.


    I suspect there is no recovery from this, based on reading a few other Debian sites. I attempted file and partition recovery on one of the two affected disks, but that only resulted in a smaller-than-normal disk/partition size and a small number of garbage files.


    Let's call this one closed, caused by a PICNIC error (Problem In Chair, Not In Computer).

    Thread title changed to remove references to SMART - SMART is not the issue here. Copying to the native /dev/sdx device node is the issue.
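
    To make the failure mode explicit for anyone searching later (my understanding, not an official statement): /dev/sdx1 is the raw partition device node, so cp'ing a file to it writes the file's bytes over the start of the partition and clobbers the ext4 superblock, whereas the path under /srv is the mounted filesystem. Roughly:

    # WRONG - writes config.xml's bytes over the start of the raw
    # partition, destroying the ext4 superblock (this was my mistake):
    cp config.xml /dev/sdb1
    # RIGHT - copy into the mounted filesystem via its mountpoint:
    cp config.xml /srv/dev-disk-by-uuid-0b6af4d0-9ae9-4f6d-af52-113c594d8a32/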


    For the record, I have another OMV box; this one is Ver 3.0.99 (Erasmus), and it too exhibits the same behaviour.


    Added a USB stick, wiped it, created a filesystem and mounted it.... /dev/sdh, the last one in the list:


    root@fatnas:~# df -h
    Filesystem      Size  Used  Avail Use% Mounted on
    udev             10M     0    10M   0% /dev
    tmpfs           790M   83M   707M  11% /run
    /dev/sde1        71G  3.3G    64G   5% /
    tmpfs           2.0G     0   2.0G   0% /dev/shm
    tmpfs           5.0M     0   5.0M   0% /run/lock
    tmpfs           2.0G     0   2.0G   0% /sys/fs/cgroup
    tmpfs           2.0G   16K   2.0G   1% /tmp
    /dev/sdb1       2.7T  1.5T   1.3T  53% /srv/dev-disk-by-label-3TB
    /dev/md127      1.8T  285G   1.6T  16% /srv/dev-disk-by-label-raida
    /dev/md128      3.6T  104G   3.5T   3% /srv/dev-disk-by-id-md-name-fatnas-RAIDB
    /dev/sdh1        15G   41M    15G   1% /srv/dev-disk-by-label-ExtUSB


    Then copied a file onto /dev/sdh1 (the raw device node) again:


    root@fatnas:/etc/openmediavault# cp config.xml /dev/sdh1


    Now go and check the disk size - a similar outcome to the later release:


    root@fatnas:/etc/openmediavault# df -h
    Filesystem      Size  Used  Avail Use% Mounted on
    udev             10M     0    10M   0% /dev
    tmpfs           790M   83M   707M  11% /run
    /dev/sde1        71G  3.3G    64G   5% /
    tmpfs           2.0G     0   2.0G   0% /dev/shm
    tmpfs           5.0M     0   5.0M   0% /run/lock
    tmpfs           2.0G     0   2.0G   0% /sys/fs/cgroup
    tmpfs           2.0G   16K   2.0G   1% /tmp
    /dev/sdb1       2.7T  1.5T   1.3T  53% /srv/dev-disk-by-label-3TB
    /dev/md127      1.8T  285G   1.6T  16% /srv/dev-disk-by-label-raida
    /dev/md128      3.6T  104G   3.5T   3% /srv/dev-disk-by-id-md-name-fatnas-RAIDB
    /dev/sdh1        26Z   26Z      0 100% /srv/dev-disk-by-label-ExtUSB


    and the filesystem goes missing.

    Even though it shows Mounted = No above, if I try to re-create the filesystem (/dev/sdh is selectable again), OMV throws an exception saying it's still mounted.
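
    If anyone wants to see what the kernel itself thinks (as opposed to the GUI), a hedged suggestion - I haven't dug into what OMV checks internally:

    # Ask the kernel whether the device is still mounted anywhere;
    # prints the mount entry if so, nothing (exit status 1) if not:
    findmnt /dev/sdh1
    # Or read the kernel's mount table directly:
    grep sdh1 /proc/mounts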


    I'm not asking for any support or advice for the older release, just using it as a comparison with the later release.


    Back to my two questions previously posed.....


    1) - I still have the first disk disconnected and out of the OMV server. Do you think there is any way to reverse the situation for it, now that I have determined a root cause?

    2) - how do we stop other users making the same error? (one idea sketched below)
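
    On question 2, beyond OMV itself maybe warning somehow, the only self-help I can think of is a shell guard. A rough sketch (safecp is just a name I made up):

    # Refuse to copy onto a block device node; otherwise behave like cp.
    safecp() {
        local dest="${@: -1}"    # last argument is the destination
        if [ -b "$dest" ]; then
            echo "safecp: refusing - $dest is a block device, not a directory" >&2
            return 1
        fi
        cp "$@"
    }

    Dropping something like that into root's .bashrc (or even aliasing cp to it) would have saved me here.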


    Thanks

    Bob

    Thanks, but I have only physical disks to use and sort out :(


    OK, I have confirmed the same behaviour is definitely repeatable.


    Cleaned up the disk, reconnected it to OMV, it was detected OK; added a filesystem, mounted it, then created a shared folder and shared it over SMB.

    Tested read/write/delete of files and folders....


    root@NAS1:/tmp# cd /etc/openmediavault/
    root@NAS1:/etc/openmediavault# df -h
    Filesystem      Size  Used  Avail Use% Mounted on
    udev            1.8G     0   1.8G   0% /dev
    tmpfs           381M  2.2M   379M   1% /run
    /dev/sda1       228G  5.6G   211G   3% /
    tmpfs           1.9G   84K   1.9G   1% /dev/shm
    tmpfs           5.0M     0   5.0M   0% /run/lock
    /dev/sdc1       1.8T   14G   1.8T   1% /srv/dev-disk-by-uuid-af890583-bcab-416d-b26e-6a3fe69c8405
    tmpfs           1.9G   64K   1.9G   1% /tmp
    /dev/sdb1       3.6T  4.4M   3.6T   1% /srv/dev-disk-by-uuid-51dcaa8d-78a7-4dee-a8d7-19b6084f057d   <-- this is the one just created


    Then copied the file to the wrong spot again:

    root@NAS1:/etc/openmediavault# ls
    config.xml  php.ini
    root@NAS1:/etc/openmediavault# cp config.xml /dev/sdb1


    Re-check df -h: the size has doubled and the usage shows 100%.

    root@NAS1:/etc/openmediavault# df -h
    Filesystem      Size  Used  Avail Use% Mounted on
    udev            1.8G     0   1.8G   0% /dev
    tmpfs           381M  2.2M   379M   1% /run
    /dev/sda1       228G  5.6G   211G   3% /
    tmpfs           1.9G   84K   1.9G   1% /dev/shm
    tmpfs           5.0M     0   5.0M   0% /run/lock
    /dev/sdc1       1.8T   14G   1.8T   1% /srv/dev-disk-by-uuid-af890583-bcab-416d-b26e-6a3fe69c8405
    tmpfs           1.9G   64K   1.9G   1% /tmp
    /dev/sdb1       7.8T  4.2T      0 100% /srv/dev-disk-by-uuid-51dcaa8d-78a7-4dee-a8d7-19b6084f057d
    root@NAS1:/etc/openmediavault#
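
    For the record, a way to see the damage at the superblock level (hedged - I'm inferring the expected output from how e2fsprogs behaves, I haven't captured it from this exact disk):

    # Print just the superblock; on healthy ext4 this shows the UUID,
    # block counts, etc. After the bad cp I'd expect "Bad magic number
    # in super-block", since config.xml's bytes now sit where the
    # superblock used to be.
    dumpe2fs -h /dev/sdb1
    # file can read the device directly too; a healthy disk reports
    # "ext4 filesystem data", a clobbered one likely "XML 1.0 document".
    file -s /dev/sdb1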


    And at this point I get 'missing' shown in the Filesystems GUI, as shown previously....


    I will stop at this point and not restart or reboot anything in case there are logs, etc you might need.

    I have a second OMV running 5.x; if I have a spare disk, it might be interesting to test the effect on that as well later this evening.


    Thanks

    Bob

    When creating a largish filesystem (4TB in my example), the Create filesystem dialog wanders off the screen horizontally after starting off as 5 columns of incrementing numbers.

    At a casual glance I initially thought it had stalled, as the numbers no longer incremented on screen.... the sweeping progress indicator at the top was still moving, but the numbers had stopped.

    It was then I noticed the horizontal scroll bar.... I suspect it's related to the change from 4 to 5 digits. It does complete successfully.



    I found the root cause...... a warning to others..... any clues as to whether this is un-do-able?


    Here is how I can break it...


    I rebuilt one of the two originally impacted data disks... added a filesystem, etc., which cleared it out.

    Added a shared folder and an SMB share, and can view and edit content on it....

    It's the last one displayed in this df -h below......


    Filesystem      Size  Used  Avail Use% Mounted on
    udev            1.8G     0   1.8G   0% /dev
    tmpfs           381M  2.2M   379M   1% /run
    /dev/sda1       228G  5.6G   211G   3% /
    tmpfs           1.9G   84K   1.9G   1% /dev/shm
    tmpfs           5.0M     0   5.0M   0% /run/lock
    /dev/sdc1       1.8T   14G   1.8T   1% /srv/dev-disk-by-uuid-af890583-bcab-416d-b26e-6a3fe69c8405
    tmpfs           1.9G  8.0K   1.9G   1% /tmp
    /dev/sdb1       3.6T   44K   3.6T   1% /srv/dev-disk-by-uuid-0b6af4d0-9ae9-4f6d-af52-113c594d8a32


    Here is the bit I missed previously.

    I thought my issue was related to enabling SMART for the disks.... it was not.

    Prior to my original issue starting, I now recall that I wanted to take a copy of the /etc/openmediavault/config.xml file.

    So I copied it across to what I thought was the correct spot using the CLI.... boy, did I screw that up....


    root@NAS1:/etc/openmediavault# cp config.xml /dev/sdb1   <----- WRONG!! :cursing::cursing::cursing:

    I should have copied to /srv/dev-disk-by-uuid-0b6af4d0-9ae9-4f6d-af52-113c594d8a32

    I work with RHEL systems a lot in my day-to-day job and we work almost exclusively with the native /dev/sdx items..... so it's habit.
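
    For fellow /dev/sdx-habit people, the habit-breaker I'm adopting (a sketch, nothing OMV-specific): resolve the device to its mountpoint first, and copy there.

    # Where is /dev/sdb1 actually mounted? (-n drops the header)
    findmnt -n -o TARGET /dev/sdb1
    # -> /srv/dev-disk-by-uuid-0b6af4d0-9ae9-4f6d-af52-113c594d8a32
    # ...and copy to that, even in one line:
    cp config.xml "$(findmnt -n -o TARGET /dev/sdb1)/"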


    See below.... notice the last line of df -h for the sdb1 partition.... it's doubled in size, with zero available and 100% full...

    root@NAS1:/etc/openmediavault# df -h
    Filesystem      Size  Used  Avail Use% Mounted on
    udev            1.8G     0   1.8G   0% /dev
    tmpfs           381M  2.2M   379M   1% /run
    /dev/sda1       228G  5.6G   211G   3% /
    tmpfs           1.9G   84K   1.9G   1% /dev/shm
    tmpfs           5.0M     0   5.0M   0% /run/lock
    /dev/sdc1       1.8T   14G   1.8T   1% /srv/dev-disk-by-uuid-af890583-bcab-416d-b26e-6a3fe69c8405
    tmpfs           1.9G  8.0K   1.9G   1% /tmp
    /dev/sdb1       7.8T  4.2T      0 100% /srv/dev-disk-by-uuid-0b6af4d0-9ae9-4f6d-af52-113c594d8a32


    And of course, if I head back to the Filesystems page, the original issue is now also visible for the new disk....




    So the root cause is known and the issue is repeatable....and I guess others may end up repeating my mistake.


    Two questions.....

    1) - I still have the first disk disconnected and out of the OMV server. Do you think there is any way to reverse the situation for it, now that I have determined a root cause? (one possible avenue sketched below)

    2) - how do we stop other users making the same error?
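
    On question 1, the only avenue I've turned up from reading around (heavily hedged - untested by me, and it could make things worse, so work on an image of the disk): config.xml is small, so presumably only the primary superblock and the first stretch of metadata were overwritten, and ext4 keeps backup superblocks further into the partition that e2fsck can repair from.

    # Work on a copy, never the original disk (paths are placeholders):
    ddrescue /dev/sdb1 /path/to/disk.img /path/to/rescue.log
    # List where mkfs WOULD place superblocks (-n = dry run, writes
    # nothing); only accurate if the original mkfs used the same defaults:
    mke2fs -n /dev/sdb1
    # Then point e2fsck at a backup superblock (32768 is the usual
    # first backup for a 4k-block ext4):
    e2fsck -b 32768 /path/to/disk.img

    Whether the directory tree and inode tables survive beyond the superblock is another question; I'll report back if I try it.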


    Thanks

    Bob

    Yes, they exist; however, they appear empty...

    root@NAS1:/srv/dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d# ls -a
    .  ..
    root@NAS1:/srv/dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d# ls -a
    .  ..



    I also took one disk from the OMV server and connected it to a Win11 laptop via a SATA/USB adaptor, and using PowerShell and WSL it sees the same:


    PS C:\WINDOWS\system32> wsl --mount \\.\PHYSICALDRIVE2
    The disk was attached but failed to mount: Invalid argument.
    For more details, run 'dmesg' inside WSL2.
    To detach the disk, run 'wsl.exe --unmount \\.\PHYSICALDRIVE2'.
    PS C:\WINDOWS\system32>
    PS C:\WINDOWS\system32> wsl
    To run a command as administrator (user "root"), use "sudo <command>".
    See "man sudo_root" for details.

    bobj@DESKTOP-MHHRVA9:/mnt/c/WINDOWS/system32$ cd /mnt/wsl
    bobj@DESKTOP-MHHRVA9:/mnt/wsl$ ll
    total 8
    drwxrwxrwt 3 root root   80 Sep 25 08:50 ./
    drwxr-xr-x 5 root root 4096 Sep 25 08:44 ../
    drwxr-xr-x 2 root root   40 Sep 25 08:50 PHYSICALDRIVE2/
    -rw-r--r-- 1 root root  198 Sep 25 08:50 resolv.conf
    bobj@DESKTOP-MHHRVA9:/mnt/wsl$ cd PHYSICALDRIVE2/
    bobj@DESKTOP-MHHRVA9:/mnt/wsl/PHYSICALDRIVE2$ ll
    total 0
    drwxr-xr-x 2 root root 40 Sep 25 08:50 ./
    drwxrwxrwt 3 root root 80 Sep 25 08:50 ../
    bobj@DESKTOP-MHHRVA9:/mnt/wsl/PHYSICALDRIVE2$
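
    For completeness: WSL's automatic mount only tries its default filesystem detection, so attaching the disk bare and mounting by hand gives a bit more visibility. A hedged sketch (flags per Microsoft's WSL docs as I understand them; the device name inside WSL is an assumption and may differ):

    # PowerShell (elevated): attach the disk without auto-mounting it
    wsl --mount \\.\PHYSICALDRIVE2 --bare
    # then inside WSL:
    sudo dmesg | tail                          # why did the earlier mount fail?
    sudo blkid /dev/sdc1                       # device name may differ on your box
    sudo mkdir -p /mnt/rescue
    sudo mount -t ext4 /dev/sdc1 /mnt/rescue   # will fail the same way if the superblock is gone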


    I suspect the story will be the same for the other rsync-replicated disk.

    Short of data recovery, I think my data is gone..... :(

    I do have backups of most of it on another platform, but not the most recent items.

    I will get a couple of recovery tools and see if the data is present and visible and recoverable.


    Unless you can think of anything else, thanks for your assistance.

    Thanks again. Looks like only the good one is displaying.


    root@NAS1:~# mount -av
    / : ignored
    none : ignored
    /srv/dev-disk-by-uuid-af890583-bcab-416d-b26e-6a3fe69c8405: already mounted
    root@NAS1:~#


    Also a snippet of syslog:


    root@NAS1:/var/log# tail -f syslog
    Sep 24 22:02:07 NAS1 monit[1302]: Filesystem '/srv/dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d' not mounted
    Sep 24 22:02:07 NAS1 monit[1302]: 'filesystem_srv_dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d' unable to read filesystem '/srv/dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d' state
    Sep 24 22:02:07 NAS1 monit[1302]: 'filesystem_srv_dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d' trying to restart
    Sep 24 22:02:07 NAS1 monit[1302]: 'mountpoint_srv_dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d' status failed (1) -- /srv/dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d is not a mountpoint
    Sep 24 22:02:07 NAS1 monit[1302]: 'mountpoint_srv_dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d' status failed (1) -- /srv/dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d is not a mountpoint
    Sep 24 22:02:07 NAS1 monit[1302]: Filesystem '/srv/dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6' not mounted
    Sep 24 22:02:07 NAS1 monit[1302]: 'filesystem_srv_dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6' unable to read filesystem '/srv/dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6' state
    Sep 24 22:02:07 NAS1 monit[1302]: 'filesystem_srv_dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6' trying to restart
    Sep 24 22:02:07 NAS1 monit[1302]: 'mountpoint_srv_dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6' status failed (1) -- /srv/dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6 is not a mountpoint
    Sep 24 22:02:07 NAS1 monit[1302]: 'mountpoint_srv_dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6' status failed (1) -- /srv/dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6 is not a mountpoint
    Sep 24 22:02:37 NAS1 monit[1302]: Filesystem '/srv/dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d' not mounted
    Sep 24 22:02:37 NAS1 monit[1302]: 'filesystem_srv_dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d' unable to read filesystem '/srv/dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d' state
    Sep 24 22:02:37 NAS1 monit[1302]: 'filesystem_srv_dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d' trying to restart
    Sep 24 22:02:37 NAS1 monit[1302]: 'mountpoint_srv_dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d' status failed (1) -- /srv/dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d is not a mountpoint
    Sep 24 22:02:37 NAS1 monit[1302]: 'mountpoint_srv_dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d' status failed (1) -- /srv/dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d is not a mountpoint
    Sep 24 22:02:37 NAS1 monit[1302]: Filesystem '/srv/dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6' not mounted
    Sep 24 22:02:37 NAS1 monit[1302]: 'filesystem_srv_dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6' unable to read filesystem '/srv/dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6' state
    Sep 24 22:02:37 NAS1 monit[1302]: 'filesystem_srv_dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6' trying to restart
    Sep 24 22:02:37 NAS1 monit[1302]: 'mountpoint_srv_dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6' status failed (1) -- /srv/dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6 is not a mountpoint
    Sep 24 22:02:37 NAS1 monit[1302]: 'mountpoint_srv_dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6' status failed (1) -- /srv/dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6 is not a mountpoint


    Thanks

    Bob

    Hi Macom,

    Thanks for your response. mount -a returns nothing.. see below.

    I see the third, smaller disk (/dev/sdd1 1.8T 14G 1.8T 1% /srv/dev-disk-by-uuid-af890583-bcab-416d-b26e-6a3fe69c8405) when I run df -h,

    but not the messed-up disks.

    Thanks

    Bob



    root@NAS1:~# mount -a
    root@NAS1:~# df -h
    Filesystem      Size  Used  Avail Use% Mounted on
    udev            1.8G     0   1.8G   0% /dev
    tmpfs           381M  2.5M   378M   1% /run
    /dev/sda1       228G  5.5G   211G   3% /
    tmpfs           1.9G   84K   1.9G   1% /dev/shm
    tmpfs           5.0M     0   5.0M   0% /run/lock
    /dev/sdd1       1.8T   14G   1.8T   1% /srv/dev-disk-by-uuid-af890583-bcab-416d-b26e-6a3fe69c8405
    tmpfs           1.9G  8.0K   1.9G   1% /tmp
    shm              63M     0    63M   0% /var/lib/containers/storage/overlay-containers/df96120819339e55780801cc2557cda97467b194e65787532d156669c85ad546/userdata/shm
    overlay         228G  5.5G   211G   3% /var/lib/containers/storage/overlay/bebf05c9fc95c9cbe6c1c0587e442e58ebcc9f1463c5dcf16f49032034a6e9b0/merged
    shm              63M     0    63M   0% /var/lib/containers/storage/overlay-containers/ff1ce3902b452ac515e38d8a39a98c2d5c4f6a610bea8cfaa21eb6126899f7dc/userdata/shm
    overlay         228G  5.5G   211G   3% /var/lib/containers/storage/overlay/83407569c5b3cab9cbdd87dfcead4eb45efa720d4927ee2c1023180a6c7946d4/merged
    overlay         228G  5.5G   211G   3% /var/lib/containers/storage/overlay/12d9f3a6626730ec88cb31449b31673849b7556bbab2ed272698ad35c37a215f/merged
    overlay         228G  5.5G   211G   3% /var/lib/containers/storage/overlay/721279de56a6533a83b27c6241cafdd2c805306eefe17de63b5845f5e87753c4/merged
    root@NAS1:~#

    Hi all,

    I am really stuck here.... I have lost two disks' filesystems/mounts, and the only change I can line up with this is changing SMART monitoring.

    I edited OMV to enable SMART for my disks and shortly afterwards noticed I was unable to browse my shares. I have rolled back the SMART changes, with no effect.


    My two largest disks - essentially copies of each other - appear to have dropped their filesystems/mounts from the GUI, and they now present in Storage > Filesystems as blank rows (see pic).

    As a consequence, I have lost access to the shares on these two disks/filesystems.

    If I try to edit these using the GUI, it flicks up the software error banner and throws me back to the main screen.



    Here is how fstab, lsblk and blkid looked at that point....

    # /etc/fstab: static file system information.
    #
    # Use 'blkid' to print the universally unique identifier for a
    # device; this may be used with UUID= as a more robust way to name devices
    # that works even if disks are added and removed. See fstab(5).
    #
    # systemd generates mount units based on this file, see systemd.mount(5).
    # Please run 'systemctl daemon-reload' after making changes here.
    #
    # <file system> <mount point> <type> <options> <dump> <pass>
    # / was on /dev/sda1 during installation
    UUID=c8a63beb-c10d-4465-a9ed-eaa775bb14df / ext4 errors=remount-ro 0 1
    # swap was on /dev/sda5 during installation
    UUID=a0f8e7f5-90ba-4b5f-9b51-46f6a82880e3 none swap sw 0 0
    # >>> [openmediavault]
    /dev/disk/by-uuid/c4deb0db-e16d-498f-a364-aa9ff6bf801d /srv/dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d ext4 defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    /dev/disk/by-uuid/f55fe3bb-f3cd-4287-8461-d761c14894b6 /srv/dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6 ext4 defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    /dev/disk/by-uuid/af890583-bcab-416d-b26e-6a3fe69c8405 /srv/dev-disk-by-uuid-af890583-bcab-416d-b26e-6a3fe69c8405 ext4 defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    # <<< [openmediavault]

    root@NAS1:/etc/openmediavault# lsblk
    NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    sda        8:0    0 232.9G  0 disk
    ├─sda1     8:1    0 231.9G  0 part /
    ├─sda2     8:2    0     1K  0 part
    └─sda5     8:5    0   976M  0 part [SWAP]
    sdb        8:16   0   3.6T  0 disk
    └─sdb1     8:17   0   3.6T  0 part /srv/dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d
    sdc        8:32   0   3.6T  0 disk
    └─sdc1     8:33   0   3.6T  0 part /srv/dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6
    sdd        8:48   0   1.8T  0 disk
    └─sdd1     8:49   0   1.8T  0 part /srv/dev-disk-by-uuid-af890583-bcab-416d-b26e-6a3fe69c8405

    root@NAS1:/etc/openmediavault# blkid
    /dev/sda1: UUID="c8a63beb-c10d-4465-a9ed-eaa775bb14df" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="5c9ad793-01"
    /dev/sda5: UUID="a0f8e7f5-90ba-4b5f-9b51-46f6a82880e3" TYPE="swap" PARTUUID="5c9ad793-05"
    /dev/sdd1: UUID="af890583-bcab-416d-b26e-6a3fe69c8405" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="f7247f28-1763-483d-b631-a3bbf113d3fe"
    /dev/sdc1: PARTUUID="72f9708f-52ff-4f2c-a3d9-56bff2cc28f1"
    /dev/sdb1: PARTUUID="4e93bf1a-f66d-45a7-b438-2f84c766dbfb"
    root@NAS1:/etc/openmediavault#


    Then after a reboot lsblk changed.. and the /srv/dev-disk-by-uuid mountpoints disappeared off the two disks:

    root@NAS1:~# lsblk
    NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    sda        8:0    0 232.9G  0 disk
    ├─sda1     8:1    0 231.9G  0 part /
    ├─sda2     8:2    0     1K  0 part
    └─sda5     8:5    0   976M  0 part [SWAP]
    sdb        8:16   0   3.6T  0 disk
    └─sdb1     8:17   0   3.6T  0 part
    sdc        8:32   0   3.6T  0 disk
    └─sdc1     8:33   0   3.6T  0 part
    sdd        8:48   0   1.8T  0 disk
    └─sdd1     8:49   0   1.8T  0 part /srv/dev-disk-by-uuid-af890583-bcab-416d-b26e-6a3fe69c8405

    root@NAS1:~# blkid
    /dev/sda1: UUID="c8a63beb-c10d-4465-a9ed-eaa775bb14df" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="5c9ad793-01"
    /dev/sda5: UUID="a0f8e7f5-90ba-4b5f-9b51-46f6a82880e3" TYPE="swap" PARTUUID="5c9ad793-05"
    /dev/sdd1: UUID="af890583-bcab-416d-b26e-6a3fe69c8405" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="f7247f28-1763-483d-b631-a3bbf113d3fe"
    /dev/sdb1: PARTUUID="4e93bf1a-f66d-45a7-b438-2f84c766dbfb"
    /dev/sdc1: PARTUUID="72f9708f-52ff-4f2c-a3d9-56bff2cc28f1"
    root@NAS1:~#


    From config.xml I can see the matching UUID, filesystem name, type, etc.

    Of the three disks below, only the top two are impacted.


    <mntent>
      <uuid>aa34012f-dca0-43ca-9de2-433bb44dcf54</uuid>
      <fsname>/dev/disk/by-uuid/c4deb0db-e16d-498f-a364-aa9ff6bf801d</fsname>
      <dir>/srv/dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d</dir>
      <type>ext4</type>
      <opts>defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl</opts>
      <freq>0</freq>
      <passno>2</passno>
      <hidden>0</hidden>
      <usagewarnthreshold>85</usagewarnthreshold>
      <comment>4TB-1</comment>
    </mntent>
    <mntent>
      <uuid>f5e904f8-3332-4ce7-a446-67f274460cf3</uuid>
      <fsname>/dev/disk/by-uuid/f55fe3bb-f3cd-4287-8461-d761c14894b6</fsname>
      <dir>/srv/dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6</dir>
      <type>ext4</type>
      <opts>defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl</opts>
      <freq>0</freq>
      <passno>2</passno>
      <hidden>0</hidden>
      <usagewarnthreshold>85</usagewarnthreshold>
      <comment></comment>
    </mntent>
    <mntent>
      <uuid>9ad9b53f-281a-4b9a-bfe8-98295f30a443</uuid>
      <fsname>/dev/disk/by-uuid/af890583-bcab-416d-b26e-6a3fe69c8405</fsname>
      <dir>/srv/dev-disk-by-uuid-af890583-bcab-416d-b26e-6a3fe69c8405</dir>
      <type>ext4</type>
      <opts>defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl</opts>
      <freq>0</freq>
      <passno>2</passno>
      <hidden>0</hidden>
      <usagewarnthreshold>85</usagewarnthreshold>
      <comment></comment>
    </mntent>
    </fstab>


    Is it possible to run a CLI mount command, using portions of these UUID, device and fs-type values, to reinstate these disks.... such that they match config.xml, hopefully return to the GUI, and ultimately the shares become usable again?


    I am not confident with the command, and I don't want to make a bad situation worse than it is...

    Is there someone who can guide me a little please?
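
    (Edit, with hindsight, to partly answer my own question: the manual command would look like the sketch below, built from the fstab entries above. But note the blkid output: sdb1 and sdc1 no longer report a UUID or TYPE, which means blkid can't read a superblock, so there is nothing for mount to find until the filesystem itself is repaired.)

    # What the manual mount WOULD be, per the first fstab entry above:
    mkdir -p /srv/dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d
    mount -t ext4 /dev/disk/by-uuid/c4deb0db-e16d-498f-a364-aa9ff6bf801d /srv/dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d
    # ...but the /dev/disk/by-uuid/ symlinks are generated from blkid
    # data, so with the superblock overwritten that path won't even exist.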

    Thanks

    Bob

    My thoughts on this:

    Tying the dashboard to the cookies of a user's browser seems to take away from the whole idea (IMO) that a server dashboard should consistently present the status of the various parts of the server; the content should not vary depending on the client/browser being used.

    I, for one, and I suspect many others, browse to and manage my OMV from a number of client devices - phone, tablet and a number of PCs, both Windows and Linux. When offsite, I can also be managing it from other PCs.

    Giving a consistent experience on the dashboard (like the consistency of the rest of the OMV GUI) simply makes sense to me.


    Having the dashboard vary between devices, or nag me that "The dashboard has not yet been configured.", is, in my mind, simply misleading and incorrect, and misses the point of a dashboard.


    I'd expect there might be a number of preferred options for the dashboard.... that might, in future, be available....

    1. Statically build and display the dashboard (giving consistency of experience regardless of client, browser or user).
    2. Dynamically build and display the dashboard based on the logged-in user (assuming multiple admin users were used).
    3. Dynamically build and display the dashboard based on the client browser's cookies (the current method, which users are discussing here and in a couple of other threads, and which gives users variable results).
    4. Don't display a dashboard. (Assumes the dashboard would be a disable-able object, maybe via a checkbox on the System > Workbench page. This would eliminate the false nag and hide the dashboard tab altogether if ticked.)

    A big thanks to all for their efforts in developing OMV to where it is today.

    Just some oddities observed on a new V6 build.


    Old V5, disconnected the data disks, fresh V6 installation, and then reconfigured to add the old data drives back... all good, except... some weirdness, both instances self-corrected.

    (1) Using SMB, when browsing shares from a Windows 11 machine, I was unable to browse to shares using \\<omvserverIPAddress>.

    It took me an hour or so to discover that browsing using \\<omvserverhostname> would allow access to the shares....

    Came back to this the next morning, and browsing by IP address and by hostname both worked.... no other changes made. It continues to work fine.


    (2) The next morning, when I browsed to the Admin UI, the GUI language had changed to French. The language was the default English when I left it last night. Changed it back to English and it has remained so....

    Thank you for the excellent pointer - it's been a while since I was on the forum and I was not specific enough in my search to find that.


    The resolution - based on the post linked above...


    At this point neither disk is selectable if I create a new array in Raid Management, as described above.


    root@fatnas:~# blkid   <-- ran this command; I can see my two RAID1 mirror disks and the raid device md127


    /dev/sda: UUID="3ef9e0b5-ebc7-53ff-783d-8f322d87e167" UUID_SUB="aab86480-2968-8f70-e0d6-70809717da46" LABEL="FatNAS:RAIDA" TYPE="linux_raid_member"
    /dev/sdb1: LABEL="3TB" UUID="05dd31f9-321b-4ebf-b088-9928defdedbb" TYPE="ext4" PARTUUID="51459904-895e-4977-bc40-ec923f4e43ed"
    /dev/sdd: UUID="3ef9e0b5-ebc7-53ff-783d-8f322d87e167" UUID_SUB="1c12974b-5ea8-42ab-0138-528ee61aa98a" LABEL="FatNAS:RAIDA" TYPE="linux_raid_member"
    /dev/sdc1: UUID="dc2abf96-23f1-4740-9b01-e41424802764" TYPE="ext4" PARTUUID="dd1ec230-01"
    /dev/sdc5: UUID="fb046a99-5036-4766-8d80-58e043a9a3c0" TYPE="swap" PARTUUID="dd1ec230-05"
    /dev/md127: LABEL="raida" UUID="ffd58de6-df75-46e4-9d88-5474a2c37494" TYPE="ext4"


    root@fatnas:~# cat /proc/mdstat <-- and confirmed the array status
    Personalities : [raid1]
    md127 : active (auto-read-only) raid1 sda[0] sdd[1]
    1953383360 blocks super 1.2 [2/2] [UU]


    unused devices: <none>
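
    (Side note, hedged from my reading of the mdadm docs: the "active (auto-read-only)" state is normal for a freshly assembled array that hasn't been written to yet; it switches to plain "active" on the first write, or can be flipped manually:)

    # mark the array writable without waiting for a write
    mdadm --readwrite /dev/md127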


    root@fatnas:~# mdadm --assemble /dev/md127 /dev/sdd --verbose --force
    mdadm: looking for devices for /dev/md127
    mdadm: /dev/sdd is busy - skipping   <-- can't do this because the disk is already in md127


    root@fatnas:~# mdadm --assemble /dev/md127 /dev/sda --verbose --force
    mdadm: looking for devices for /dev/md127
    mdadm: /dev/sda is busy - skipping <-- ditto for the other disk


    root@fatnas:~# mdadm --stop /dev/md127   <-- so stop it
    mdadm: stopped /dev/md127


    Now let's put Humpty back together again, including both disks in the command..... :)


    root@fatnas:~# mdadm --assemble /dev/md127 /dev/sda /dev/sdd --verbose --force
    mdadm: looking for devices for /dev/md127
    mdadm: /dev/sda is identified as a member of /dev/md127, slot 0.
    mdadm: /dev/sdd is identified as a member of /dev/md127, slot 1.
    mdadm: added /dev/sdd to /dev/md127 as 1
    mdadm: added /dev/sda to /dev/md127 as 0
    mdadm: /dev/md127 has been started with 2 drives.
    root@fatnas:~#


    At this stage I navigate to Raid Management in the GUI and the array is there! Woohoo - clean, and with both disks showing.
    Went to Filesystems and /dev/md127 is now showing, but unmounted - so I mounted it.
    Added a shared folder with the desired name.
    Added the SMB/CIFS share, etc.....
    All good and accessible, with read/write access, etc....


    :thumbup:
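
    One follow-up step I'd consider (a hedged suggestion - standard Debian mdadm housekeeping rather than anything OMV-specific), so the array reassembles by itself on the next boot:

    # Record the now-assembled array in mdadm's config...
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    # ...and rebuild the initramfs so early boot knows about it:
    update-initramfs -u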

    Had a 2.x build with a 3TB single disk plus 2+2TB RAID1 mirror storage + OS disk....
    Tried the CLI upgrade, no plugins; it still went badly - threw up a lot of errors, with the end result being no GUI, and the CLI would not let me in.
    So I took the advice and went with a clean install..
    Pulled the SATA cables from all 3 storage disks (marked them first).
    Reinstalled as 3.x, same hardware.. all good.
    Added the single 3TB disk, created the FS (ext4), added a Shared Folder, SMB share and user, and all is fine..
    The problem is with the RAID1 disks when I reconnected them.
    I can see the old RAID1 disks on the Physical Disks page - they are plugged in identically as before.
    I cannot see any disks (aka 'devices' in the GUI) to select on the RAID Mgt page when creating an array.
    I can see and select either of the two old RAID1 disks on the File Systems page - but of course selecting, say, ext4 would blow away my data, according to the 'do you really want to format......' warning popup...... nope! Selecting any other filesystem in the dropdown gives the same warning....


    Is there a way to reinstate the array in the newly built 3.x environment please?
    Any assistance would be appreciated..

    Drifting slightly from the original thread, but only due to the version: I too get blank screens.
    I did a clean 0.5.24 install - no plugins, standard or otherwise - and the web GUI works fine, with the login dialogue displayed in IE or Firefox.
    Updated using the GUI and now operating at 0.5.44 courtesy of the updated files.
    Now launching IE at the main address of OMV gives a blank GUI with no login dialogue box. BUT... it's OK on Firefox.
    Please don't say "use Firefox", as the user base for this is SOE'd to IE.
    Tested on two different hardware platforms, both AMD, and also tested from the browsers of 4 different PCs and laptops, including SOE and non-SOE machines with IE 9 and 10.


    This is repeatable.


    Any directions to resolve this - other than not updating or not using IE - would be appreciated.