lost two filesystems - Filesystems page shows two blank rows after copying a file to /dev/sdx

  • Hi all,

I am really stuck here... I have lost the filesystems/mounts on two disks, and the only change I can line up with this is enabling SMART monitoring.

I edited OMV to enable SMART for my disks and shortly afterwards noticed I was unable to browse my shares. I have rolled back the SMART changes with no effect.


My two largest disks - essentially copies of each other - appear to have dropped their filesystems/mounts from the GUI, and they now show in Storage > Filesystems as blank rows (see pic).

As a consequence, I have lost access to the shares on these two disks/filesystems.

If I try to edit them in the GUI, it flashes the software error banner and throws me back to the main screen.



    Here is how fstab, lsblk and blkid looked at that point....

    # /etc/fstab: static file system information.

    #

    # Use 'blkid' to print the universally unique identifier for a

    # device; this may be used with UUID= as a more robust way to name devices

    # that works even if disks are added and removed. See fstab(5).

    #

    # systemd generates mount units based on this file, see systemd.mount(5).

    # Please run 'systemctl daemon-reload' after making changes here.

    #

    # <file system> <mount point> <type> <options> <dump> <pass>

    # / was on /dev/sda1 during installation

    UUID=c8a63beb-c10d-4465-a9ed-eaa775bb14df / ext4 errors=remount-ro 0 1

    # swap was on /dev/sda5 during installation

    UUID=a0f8e7f5-90ba-4b5f-9b51-46f6a82880e3 none swap sw 0 0

    # >>> [openmediavault]

    /dev/disk/by-uuid/c4deb0db-e16d-498f-a364-aa9ff6bf801d /srv/dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d ext4 defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2

    /dev/disk/by-uuid/f55fe3bb-f3cd-4287-8461-d761c14894b6 /srv/dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6 ext4 defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2

    /dev/disk/by-uuid/af890583-bcab-416d-b26e-6a3fe69c8405 /srv/dev-disk-by-uuid-af890583-bcab-416d-b26e-6a3fe69c8405 ext4 defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2

    # <<< [openmediavault]

    root@NAS1:/etc/openmediavault# lsblk

    NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT

    sda 8:0 0 232.9G 0 disk

    ├─sda1 8:1 0 231.9G 0 part /

    ├─sda2 8:2 0 1K 0 part

    └─sda5 8:5 0 976M 0 part [SWAP]

    sdb 8:16 0 3.6T 0 disk

    └─sdb1 8:17 0 3.6T 0 part /srv/dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d

    sdc 8:32 0 3.6T 0 disk

    └─sdc1 8:33 0 3.6T 0 part /srv/dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6

    sdd 8:48 0 1.8T 0 disk

    └─sdd1 8:49 0 1.8T 0 part /srv/dev-disk-by-uuid-af890583-bcab-416d-b26e-6a3fe69c8405

    root@NAS1:/etc/openmediavault# blkid

    /dev/sda1: UUID="c8a63beb-c10d-4465-a9ed-eaa775bb14df" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="5c9ad793-01"

    /dev/sda5: UUID="a0f8e7f5-90ba-4b5f-9b51-46f6a82880e3" TYPE="swap" PARTUUID="5c9ad793-05"

    /dev/sdd1: UUID="af890583-bcab-416d-b26e-6a3fe69c8405" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="f7247f28-1763-483d-b631-a3bbf113d3fe"

    /dev/sdc1: PARTUUID="72f9708f-52ff-4f2c-a3d9-56bff2cc28f1"

    /dev/sdb1: PARTUUID="4e93bf1a-f66d-45a7-b438-2f84c766dbfb"

    root@NAS1:/etc/openmediavault#


Then after a reboot the lsblk output changed, and the /srv/dev-disk-by-uuid mountpoints disappeared from two of the disks (see my note after the blkid output below).

    root@NAS1:~# lsblk

    NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT

    sda 8:0 0 232.9G 0 disk

    ├─sda1 8:1 0 231.9G 0 part /

    ├─sda2 8:2 0 1K 0 part

    └─sda5 8:5 0 976M 0 part [SWAP]

    sdb 8:16 0 3.6T 0 disk

    └─sdb1 8:17 0 3.6T 0 part

    sdc 8:32 0 3.6T 0 disk

    └─sdc1 8:33 0 3.6T 0 part

    sdd 8:48 0 1.8T 0 disk

    └─sdd1 8:49 0 1.8T 0 part /srv/dev-disk-by-uuid-af890583-bcab-416d-b26e-6a3fe69c8405

    root@NAS1:~# blkid

    /dev/sda1: UUID="c8a63beb-c10d-4465-a9ed-eaa775bb14df" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="5c9ad793-01"

    /dev/sda5: UUID="a0f8e7f5-90ba-4b5f-9b51-46f6a82880e3" TYPE="swap" PARTUUID="5c9ad793-05"

    /dev/sdd1: UUID="af890583-bcab-416d-b26e-6a3fe69c8405" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="f7247f28-1763-483d-b631-a3bbf113d3fe"

    /dev/sdb1: PARTUUID="4e93bf1a-f66d-45a7-b438-2f84c766dbfb"

    /dev/sdc1: PARTUUID="72f9708f-52ff-4f2c-a3d9-56bff2cc28f1"

    root@NAS1:~#
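(Side note: comparing the two blkid outputs, sdb1 and sdc1 now report only a PARTUUID and no UUID or TYPE, which I read as blkid no longer finding an ext4 signature at the start of those partitions. My understanding is that something like the following would confirm whether any signature is still there, without writing anything to the disks; I have not run these yet.)

wipefs /dev/sdb1     # with no options wipefs only lists the signatures it finds, it does not erase
file -s /dev/sdb1    # -s reads the block device itself and reports what its start looks like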


From config.xml I can see the matching UUID, filesystem name, type, etc.

    Of the three disks below, only the top two are impacted.


    <mntent>

    <uuid>aa34012f-dca0-43ca-9de2-433bb44dcf54</uuid>

    <fsname>/dev/disk/by-uuid/c4deb0db-e16d-498f-a364-aa9ff6bf801d</fsname>

    <dir>/srv/dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d</dir>

    <type>ext4</type>

    <opts>defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl</opts>

    <freq>0</freq>

    <passno>2</passno>

    <hidden>0</hidden>

    <usagewarnthreshold>85</usagewarnthreshold>

    <comment>4TB-1</comment>

    </mntent>

    <mntent>

    <uuid>f5e904f8-3332-4ce7-a446-67f274460cf3</uuid>

    <fsname>/dev/disk/by-uuid/f55fe3bb-f3cd-4287-8461-d761c14894b6</fsname>

    <dir>/srv/dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6</dir>

    <type>ext4</type>

    <opts>defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl</opts>

    <freq>0</freq>

    <passno>2</passno>

    <hidden>0</hidden>

    <usagewarnthreshold>85</usagewarnthreshold>

    <comment></comment>

    </mntent>

    <mntent>

    <uuid>9ad9b53f-281a-4b9a-bfe8-98295f30a443</uuid>

    <fsname>/dev/disk/by-uuid/af890583-bcab-416d-b26e-6a3fe69c8405</fsname>

    <dir>/srv/dev-disk-by-uuid-af890583-bcab-416d-b26e-6a3fe69c8405</dir>

    <type>ext4</type>

    <opts>defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl</opts>

    <freq>0</freq>

    <passno>2</passno>

    <hidden>0</hidden>

    <usagewarnthreshold>85</usagewarnthreshold>

    <comment></comment>

    </mntent>

    </fstab>


Is it possible to run a CLI mount command, using these UUIDs, device names and filesystem type values, to reinstate these disks so that they match config.xml, hopefully reappear in the GUI, and ultimately make the shares usable again?
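For context, I assume the manual equivalent of the openmediavault fstab entries would be something like the commands below (not run; the options are copied straight from fstab above). My worry is that, because blkid no longer reports a UUID or TYPE for sdb1 and sdc1, the /dev/disk/by-uuid/ symlinks probably no longer exist, so I would expect both forms to fail rather than bring the filesystems back.

# explicit mount, mirroring the fstab entry:
mount -t ext4 -o defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl /dev/disk/by-uuid/c4deb0db-e16d-498f-a364-aa9ff6bf801d /srv/dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d

# or, since the entry is already in /etc/fstab, simply:
mount /srv/dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d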


I am not confident with the command, and I don't want to make a bad situation worse than it already is...

    Is there someone who can guide me a little please?

    Thanks

    Bob

  • Hi Macom,

Thanks for your response. mount -a returns nothing; see below.

I see the third, smaller disk (/dev/sdd1 1.8T 14G 1.8T 1% /srv/dev-disk-by-uuid-af890583-bcab-416d-b26e-6a3fe69c8405) when I run df -h,

but not the messed-up disks.

    Thanks

    Bob



    root@NAS1:~# mount -a

    root@NAS1:~# df -h

    Filesystem Size Used Avail Use% Mounted on

    udev 1.8G 0 1.8G 0% /dev

    tmpfs 381M 2.5M 378M 1% /run

    /dev/sda1 228G 5.5G 211G 3% /

    tmpfs 1.9G 84K 1.9G 1% /dev/shm

    tmpfs 5.0M 0 5.0M 0% /run/lock

    /dev/sdd1 1.8T 14G 1.8T 1% /srv/dev-disk-by-uuid-af890583-bcab-416d-b26e-6a3fe69c8405

    tmpfs 1.9G 8.0K 1.9G 1% /tmp

    shm 63M 0 63M 0% /var/lib/containers/storage/overlay-containers/df96120819339e55780801cc2557cda97467b194e65787532d156669c85ad546/userdata/shm

    overlay 228G 5.5G 211G 3% /var/lib/containers/storage/overlay/bebf05c9fc95c9cbe6c1c0587e442e58ebcc9f1463c5dcf16f49032034a6e9b0/merged

    shm 63M 0 63M 0% /var/lib/containers/storage/overlay-containers/ff1ce3902b452ac515e38d8a39a98c2d5c4f6a610bea8cfaa21eb6126899f7dc/userdata/shm

    overlay 228G 5.5G 211G 3% /var/lib/containers/storage/overlay/83407569c5b3cab9cbdd87dfcead4eb45efa720d4927ee2c1023180a6c7946d4/merged

    overlay 228G 5.5G 211G 3% /var/lib/containers/storage/overlay/12d9f3a6626730ec88cb31449b31673849b7556bbab2ed272698ad35c37a215f/merged

    overlay 228G 5.5G 211G 3% /var/lib/containers/storage/overlay/721279de56a6533a83b27c6241cafdd2c805306eefe17de63b5845f5e87753c4/merged

    root@NAS1:~#

• Thanks again. Looks like only the good one is showing.


    root@NAS1:~# mount -av

    / : ignored

    none : ignored

    /srv/dev-disk-by-uuid-af890583-bcab-416d-b26e-6a3fe69c8405: already mounted

    root@NAS1:~#


Also a snippet of syslog:


    root@NAS1:/var/log# tail -f syslog

    Sep 24 22:02:07 NAS1 monit[1302]: Filesystem '/srv/dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d' not mounted

    Sep 24 22:02:07 NAS1 monit[1302]: 'filesystem_srv_dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d' unable to read filesystem '/srv/dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d' state

    Sep 24 22:02:07 NAS1 monit[1302]: 'filesystem_srv_dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d' trying to restart

    Sep 24 22:02:07 NAS1 monit[1302]: 'mountpoint_srv_dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d' status failed (1) -- /srv/dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d is not a mountpoint

    Sep 24 22:02:07 NAS1 monit[1302]: 'mountpoint_srv_dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d' status failed (1) -- /srv/dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d is not a mountpoint

    Sep 24 22:02:07 NAS1 monit[1302]: Filesystem '/srv/dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6' not mounted

    Sep 24 22:02:07 NAS1 monit[1302]: 'filesystem_srv_dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6' unable to read filesystem '/srv/dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6' state

    Sep 24 22:02:07 NAS1 monit[1302]: 'filesystem_srv_dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6' trying to restart

    Sep 24 22:02:07 NAS1 monit[1302]: 'mountpoint_srv_dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6' status failed (1) -- /srv/dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6 is not a mountpoint

    Sep 24 22:02:07 NAS1 monit[1302]: 'mountpoint_srv_dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6' status failed (1) -- /srv/dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6 is not a mountpoint

    Sep 24 22:02:37 NAS1 monit[1302]: Filesystem '/srv/dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d' not mounted

    Sep 24 22:02:37 NAS1 monit[1302]: 'filesystem_srv_dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d' unable to read filesystem '/srv/dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d' state

    Sep 24 22:02:37 NAS1 monit[1302]: 'filesystem_srv_dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d' trying to restart

    Sep 24 22:02:37 NAS1 monit[1302]: 'mountpoint_srv_dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d' status failed (1) -- /srv/dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d is not a mountpoint

    Sep 24 22:02:37 NAS1 monit[1302]: 'mountpoint_srv_dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d' status failed (1) -- /srv/dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d is not a mountpoint

    Sep 24 22:02:37 NAS1 monit[1302]: Filesystem '/srv/dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6' not mounted

    Sep 24 22:02:37 NAS1 monit[1302]: 'filesystem_srv_dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6' unable to read filesystem '/srv/dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6' state

    Sep 24 22:02:37 NAS1 monit[1302]: 'filesystem_srv_dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6' trying to restart

    Sep 24 22:02:37 NAS1 monit[1302]: 'mountpoint_srv_dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6' status failed (1) -- /srv/dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6 is not a mountpoint

    Sep 24 22:02:37 NAS1 monit[1302]: 'mountpoint_srv_dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6' status failed (1) -- /srv/dev-disk-by-uuid-f55fe3bb-f3cd-4287-8461-d761c14894b6 is not a mountpoint


    Thanks

    Bob

• Yes, they exist; however, they appear empty...

    root@NAS1:/srv/dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d# ls -a

    . ..


    root@NAS1:/srv/dev-disk-by-uuid-c4deb0db-e16d-498f-a364-aa9ff6bf801d# ls -a

    . ..



I also took one disk out of the OMV server and connected it to a Win11 laptop via a SATA/USB adaptor; using PowerShell and WSL, it sees the same thing:


    PS C:\WINDOWS\system32> wsl --mount \\.\PHYSICALDRIVE2

    The disk was attached but failed to mount: Invalid argument.

    For more details, run 'dmesg' inside WSL2.

    To detach the disk, run 'wsl.exe --unmount \\.\PHYSICALDRIVE2'.

    PS C:\WINDOWS\system32>

    PS C:\WINDOWS\system32> wsl

    To run a command as administrator (user "root"), use "sudo <command>".

    See "man sudo_root" for details.


    bobj@DESKTOP-MHHRVA9:/mnt/c/WINDOWS/system32$ cd /mnt/wsl

    bobj@DESKTOP-MHHRVA9:/mnt/wsl$ ll

    total 8

    drwxrwxrwt 3 root root 80 Sep 25 08:50 ./

    drwxr-xr-x 5 root root 4096 Sep 25 08:44 ../

    drwxr-xr-x 2 root root 40 Sep 25 08:50 PHYSICALDRIVE2/

    -rw-r--r-- 1 root root 198 Sep 25 08:50 resolv.conf

    bobj@DESKTOP-MHHRVA9:/mnt/wsl$ cd PHYSICALDRIVE2/

    bobj@DESKTOP-MHHRVA9:/mnt/wsl/PHYSICALDRIVE2$ ll

    total 0

    drwxr-xr-x 2 root root 40 Sep 25 08:50 ./

    drwxrwxrwt 3 root root 80 Sep 25 08:50 ../

    bobj@DESKTOP-MHHRVA9:/mnt/wsl/PHYSICALDRIVE2$


I suspect the story will be the same for the other rsync-replicated disk.

Short of data recovery, I think my data is gone... :(

I do have backups of most of it on another platform, but not the most recent items.

I will get a couple of recovery tools and see if the data is present, visible and recoverable.
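One thing I plan to try first, based on the e2fsprogs man pages (untested here, and ideally run against a dd image of the disk rather than the disk itself, since e2fsck writes to the device it repairs):

# ext4 keeps backup copies of the superblock further into the partition, so if only the
# primary superblock area is damaged, e2fsck may be able to repair from a backup copy.
mke2fs -n /dev/sdX1         # -n is a dry run: prints where the backup superblocks would be
                            # (only accurate if the filesystem was created with default options)
e2fsck -b 32768 /dev/sdX1   # fsck using a backup superblock; 32768 is the usual first backup
                            # on a 4k-block filesystem. /dev/sdX1 is a placeholder for the disk.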


    Unless you can think of anything else, thanks for your assistance.

• I found the root cause... a warning to others... any clues as to whether this is undoable?


    Here is how I can break it...


I rebuilt one of the two originally impacted data disks (added a filesystem, etc.), which cleared it out.

Added a share and an SMB share, and I can view and edit content on it.

It is the last one displayed in the df -h output below:


    Filesystem Size Used Avail Use% Mounted on

    udev 1.8G 0 1.8G 0% /dev

    tmpfs 381M 2.2M 379M 1% /run

    /dev/sda1 228G 5.6G 211G 3% /

    tmpfs 1.9G 84K 1.9G 1% /dev/shm

    tmpfs 5.0M 0 5.0M 0% /run/lock

    /dev/sdc1 1.8T 14G 1.8T 1% /srv/dev-disk-by-uuid-af890583-bcab-416d-b26e-6a3fe69c8405

    tmpfs 1.9G 8.0K 1.9G 1% /tmp

    /dev/sdb1 3.6T 44K 3.6T 1% /srv/dev-disk-by-uuid-0b6af4d0-9ae9-4f6d-af52-113c594d8a32


Here is the bit I missed previously:

I thought my issue was related to enabling SMART for the disks... it was not.

Prior to my original issue starting, I now recall that I wanted to take a copy of the /etc/openmediavault/config.xml file.

So I copied it across to what I thought was the correct spot using the CLI... boy, did I screw that up...


root@NAS1:/etc/openmediavault# cp config.xml /dev/sdb1    <----- WRONG!! :cursing::cursing::cursing:

I should have copied it to /srv/dev-disk-by-uuid-0b6af4d0-9ae9-4f6d-af52-113c594d8a32.

I work with RHEL systems a lot in my day-to-day job, and we work almost exclusively with the native /dev/sdx devices... so it's habit.
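In other words (my understanding, for anyone else who lands here): /dev/sdb1 is the raw block device, so cp writes the file over the first bytes of the partition itself, right where the ext4 superblock lives, instead of creating a file on the mounted filesystem. The difference is only the target path:

# destructive - overwrites the start of the partition (what I actually ran):
cp config.xml /dev/sdb1

# what I meant to do - copy onto the mounted filesystem:
cp config.xml /srv/dev-disk-by-uuid-0b6af4d0-9ae9-4f6d-af52-113c594d8a32/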


See below... notice the last line of the df -h output for the sdb1 partition: it has doubled in size, shows zero available, and is 100% full.

    root@NAS1:/etc/openmediavault# df -h

    Filesystem Size Used Avail Use% Mounted on

    udev 1.8G 0 1.8G 0% /dev

    tmpfs 381M 2.2M 379M 1% /run

    /dev/sda1 228G 5.6G 211G 3% /

    tmpfs 1.9G 84K 1.9G 1% /dev/shm

    tmpfs 5.0M 0 5.0M 0% /run/lock

    /dev/sdc1 1.8T 14G 1.8T 1% /srv/dev-disk-by-uuid-af890583-bcab-416d-b26e-6a3fe69c8405

    tmpfs 1.9G 8.0K 1.9G 1% /tmp

    /dev/sdb1 7.8T 4.2T 0 100% /srv/dev-disk-by-uuid-0b6af4d0-9ae9-4f6d-af52-113c594d8a32


And of course, if I head back to the Filesystems page, the original issue is now also visible for the new disk...

So the root cause is known and the issue is repeatable... and I guess others may end up repeating my mistake.


Two questions:

1) I still have the first disk disconnected and out of the OMV server. Do you think there is any way to reverse the situation for it, now that I have determined a root cause?

2) How do we stop other users from making the same error?


    Thanks

    Bob

  • Thanks, but I have only physical disks to use and sort out :(


    OK, I have confirmed the same behaviour is definitely repeatable.


Cleaned up the disk, reconnected it to OMV, it was detected OK, added a filesystem, mounted it, then created a share and an SMB share.

Tested read/write/delete of files and folders...


    root@NAS1:/tmp# cd /etc/openmediavault/

    root@NAS1:/etc/openmediavault# df -h

    Filesystem Size Used Avail Use% Mounted on

    udev 1.8G 0 1.8G 0% /dev

    tmpfs 381M 2.2M 379M 1% /run

    /dev/sda1 228G 5.6G 211G 3% /

    tmpfs 1.9G 84K 1.9G 1% /dev/shm

    tmpfs 5.0M 0 5.0M 0% /run/lock

    /dev/sdc1 1.8T 14G 1.8T 1% /srv/dev-disk-by-uuid-af890583-bcab-416d-b26e-6a3fe69c8405

    tmpfs 1.9G 64K 1.9G 1% /tmp

    /dev/sdb1 3.6T 4.4M 3.6T 1% /srv/dev-disk-by-uuid-51dcaa8d-78a7-4dee-a8d7-19b6084f057d <-- this is the one just created


Then copied the file to the wrong spot again:

    root@NAS1:/etc/openmediavault# ls

    config.xml php.ini

    root@NAS1:/etc/openmediavault# cp config.xml /dev/sdb1


Re-checking df -h, the size has doubled and the usage is 100%:

    root@NAS1:/etc/openmediavault# df -h

    Filesystem Size Used Avail Use% Mounted on

    udev 1.8G 0 1.8G 0% /dev

    tmpfs 381M 2.2M 379M 1% /run

    /dev/sda1 228G 5.6G 211G 3% /

    tmpfs 1.9G 84K 1.9G 1% /dev/shm

    tmpfs 5.0M 0 5.0M 0% /run/lock

    /dev/sdc1 1.8T 14G 1.8T 1% /srv/dev-disk-by-uuid-af890583-bcab-416d-b26e-6a3fe69c8405

    tmpfs 1.9G 64K 1.9G 1% /tmp

    /dev/sdb1 7.8T 4.2T 0 100% /srv/dev-disk-by-uuid-51dcaa8d-78a7-4dee-a8d7-19b6084f057d

    root@NAS1:/etc/openmediavault#


And at this point I get 'missing' shown on the Filesystems page in the GUI, as shown previously...


I will stop at this point and not restart or reboot anything, in case there are logs, etc. you might need.

I have a 2nd OMV running 5.x, and if I have a spare disk it might be interesting to test the effect on that as well later this evening.


    Thanks

    Bob

• draggaj

Added the label OMV 6.x.
• draggaj

Changed the thread title from "lost two filesystems - Filesystems page shows two blank rows coinciding with a SMART change." to "lost two filesystems - Filesystems page shows two blank rows after copying a file to /dev/sdx".
  • Thread title changed to remove references to SMART - SMART is not the issue here. Copying to the native /dev/sdx is the issue.


For the record, I have another OMV; this one is Ver 3.0.99 (Erasmus), and it too exhibits the same behaviour.


Added a USB stick, wiped it, added a filesystem, and mounted it... /dev/sdh, the last one in the list below:


    root@fatnas:~# df -h

    Filesystem Size Used Avail Use% Mounted on

    udev 10M 0 10M 0% /dev

    tmpfs 790M 83M 707M 11% /run

    /dev/sde1 71G 3.3G 64G 5% /

    tmpfs 2.0G 0 2.0G 0% /dev/shm

    tmpfs 5.0M 0 5.0M 0% /run/lock

    tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup

    tmpfs 2.0G 16K 2.0G 1% /tmp

    /dev/sdb1 2.7T 1.5T 1.3T 53% /srv/dev-disk-by-label-3TB

    /dev/md127 1.8T 285G 1.6T 16% /srv/dev-disk-by-label-raida

    /dev/md128 3.6T 104G 3.5T 3% /srv/dev-disk-by-id-md-name-fatnas-RAIDB

    /dev/sdh1 15G 41M 15G 1% /srv/dev-disk-by-label-ExtUSB


Then copied a file onto /dev/sdh1:


    root@fatnas:/etc/openmediavault# cp config.xml /dev/sdh1


Now checking the disk size again, there is a similar outcome to the later release:


    root@fatnas:/etc/openmediavault# df -h

    Filesystem Size Used Avail Use% Mounted on

    udev 10M 0 10M 0% /dev

    tmpfs 790M 83M 707M 11% /run

    /dev/sde1 71G 3.3G 64G 5% /

    tmpfs 2.0G 0 2.0G 0% /dev/shm

    tmpfs 5.0M 0 5.0M 0% /run/lock

    tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup

    tmpfs 2.0G 16K 2.0G 1% /tmp

    /dev/sdb1 2.7T 1.5T 1.3T 53% /srv/dev-disk-by-label-3TB

    /dev/md127 1.8T 285G 1.6T 16% /srv/dev-disk-by-label-raida

    /dev/md128 3.6T 104G 3.5T 3% /srv/dev-disk-by-id-md-name-fatnas-RAIDB

    /dev/sdh1 26Z 26Z 0 100% /srv/dev-disk-by-label-ExtUSB


And the filesystem goes missing.

Even though it shows Mounted = No above, if I try to re-create the filesystem (/dev/sdh is selectable again), OMV throws an exception saying it is still mounted.
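I assume that is because the kernel still has the old (now broken) filesystem mounted at its mountpoint, so it would need to be unmounted before re-creating anything; something like this should clear it (untested on this box):

umount /srv/dev-disk-by-label-ExtUSB      # or: umount /dev/sdh1
umount -l /srv/dev-disk-by-label-ExtUSB   # lazy unmount, if the first form complains the target is busy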


    I'm not asking for any support or advice for the older release, just using it as a comparison for the later release.


Back to my two questions previously posed:

1) I still have the first disk disconnected and out of the OMV server. Do you think there is any way to reverse the situation for it, now that I have determined a root cause?

2) How do we stop other users from making the same error?


    Thanks

    Bob

• The disks have been wiped, reinstalled and added back, and the filesystems re-added. The older missing filesystems have been unmounted, which removes them from the GUI.

I suspect there is no recovery from this, based on reading a few other Debian sites. I attempted file and partition recovery on one of the two affected disks, but that only resulted in a smaller-than-normal disk/partition size and a small number of garbage files.

Let's call this one closed, caused by a PICNIC error.
