Filesystem missing issue

  • Hi All, having a new issue with omv6 after some tinkering.


    Situation: Upgraded some disks, retiring my RAID 1 (mirrored) pair of 4TB HDDs for a pair of 8TB SSDs, again mirrored. I built the array yesterday, then spent all day copying the old array to the new using rsync in archive mode. I have a second array for CCTV but this seems fine and not impacted.
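    For reference, an archive-mode copy like the one described might look like this. This is a sketch with placeholder paths under /tmp/demo, not the actual mount points:

    ```shell
    # Stand-in directories for the old and new array mount points
    # (on OMV these would be under /srv/dev-disk-by-*)
    mkdir -p /tmp/demo/old-array/tv /tmp/demo/new-array
    echo "episode" > /tmp/demo/old-array/tv/ep1.mkv

    # -a (archive) preserves permissions, ownership, timestamps and symlinks;
    # the trailing slash on the source copies its contents, not the folder itself
    rsync -a /tmp/demo/old-array/ /tmp/demo/new-array/

    ls /tmp/demo/new-array/tv
    ```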


    The array is my main storage and contained some shared public folders, tv shows, films, music and such, as well as the Personal folder that OMV creates for its users. I went through all config settings in the UI to point everything at the new array. All seemed well until I rebooted.


    Now the public shares all work, I can watch the films from a remote pc just fine as before. But I cannot mount the private folders anymore. Looking at OMV there are errors I cannot clear.


    Under Storage > Software RAID I see both arrays, both clean. md1 is the new one. md127 is cctv.


    Now if I go to Storage > File Systems I see 3 items: the cctv one along with the / disk, but the one for the new md1 is missing, literally with the status of "missing". Attempting to edit the empty entry shows this error:

    fsname: The value "" does not match exactly one schema of [{"type":"string","format":"fsuuid"},{"type":"string","format":"devicefile"},{"type":"string","format":"dirpath"}].
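    That error suggests a mntent entry in /etc/openmediavault/config.xml whose fsname is empty. A quick way to spot one, shown here against a made-up sample file (since I don't have the real config.xml, the UUIDs below are the two from this thread):

    ```shell
    # Made-up sample mimicking the fstab section of OMV's config.xml;
    # the empty <fsname></fsname> is what trips the
    # "does not match exactly one schema" validation
    cat > /tmp/config-sample.xml <<'EOF'
    <fstab>
      <mntent>
        <fsname>9c535bde-4e53-451d-9f15-e514e68dafb6</fsname>
        <dir>/srv/dev-disk-by-uuid-9c535bde-4e53-451d-9f15-e514e68dafb6</dir>
      </mntent>
      <mntent>
        <fsname></fsname>
        <dir>/srv/dev-disk-by-uuid-29b1084b-a7f3-432d-99b9-1f71e9a2c383</dir>
      </mntent>
    </fstab>
    EOF
    # Flags the offending entry with its line number
    grep -n '<fsname></fsname>' /tmp/config-sample.xml
    ```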


    I then checked fstab and it looks fine to my eye, nothing wrong and with the correct reference to the mount point.


    If I unmount the partition and run fsck it comes back clean and fine. I have no reason to think the disks/filesystem/data are bad at this point, since I can access 95% of it via smb (and 100% via the terminal using ssh) and df shows it's 42% full as expected.
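    The read-only check can be reproduced on a throwaway image file. This is just a toy to show the flags, not the real array; on the array itself it would be something like fsck -n /dev/md1 while unmounted:

    ```shell
    # Create a small scratch ext4 filesystem inside a regular file (no root needed)
    dd if=/dev/zero of=/tmp/scratch.img bs=1M count=8 status=none
    mke2fs -q -F -t ext4 /tmp/scratch.img

    # -n = open read-only and answer "no" to every question, so the check
    # is safe to run and makes no changes; a healthy filesystem reports "clean"
    fsck.ext4 -n /tmp/scratch.img
    ```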


    Editing and saving settings related to shares (in the OMV UI) all go through fine but nothing changes. I see nothing serious in the system logs that would relate to this. I've checked the smb config file and it looks good, all pointing to the right places; tried restarting it via systemd and nothing changes. smb is not really the problem, I think. If the file system error above can be cleared up I think smb will fully work.


    Notes: The old array was mounted by disk label, which I read is legacy; the new one uses UUID. Docker was fixed with a symlink at the old disk directory /srv/dev-disk-by-label-NAME/ pointing to the new location, which fixed all my Docker containers etc.
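    The symlink workaround can be sketched like this. Demo paths under /tmp are used here; the real ones would be under /srv, with the UUID taken from this thread:

    ```shell
    # Compatibility symlink: the old label-based path points at the new
    # by-uuid mount, so containers with the old path hardcoded keep working
    mkdir -p /tmp/demo-srv/dev-disk-by-uuid-29b1084b-a7f3-432d-99b9-1f71e9a2c383
    ln -sfn /tmp/demo-srv/dev-disk-by-uuid-29b1084b-a7f3-432d-99b9-1f71e9a2c383 \
            /tmp/demo-srv/dev-disk-by-label-NAME

    # Confirms where the legacy path now resolves to
    readlink /tmp/demo-srv/dev-disk-by-label-NAME
    ```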


    Edit: I just discovered that rebooting breaks samba; I have to disable and enable it again to make it work.


  • I'm leaning towards the last resort: reinstalling OMV from scratch and remounting and remaking everything. In theory the most complex parts are the Docker containers etc, but they should just come back to life when pointed at the right place. Unless anyone can point me to something useful I am not finding much myself. Will keep looking but I think a clean start might be the way to go.

  • I've since tried clearing out fstab as described here. Then ran omv-salt deploy run fstab and it made no difference. I guess something is wrong in /etc/openmediavault/config.xml but it looks fine to me, nothing obviously wrong there. Will take a break and try later.

  • Post the output of

    cat /etc/fstab

    sudo omv-showkey mntent

    sudo omv-showkey fstab

  • NEW fstab (basically the same):

    sudo omv-showkey mntent:

    sudo omv-showkey fstab

  • As you said, the fstab looks correct, with all mount points matching the mntent in config.xml and seen by OMV.

    Since you had the drives mounted by LABEL, maybe you still have some leftover pointing to its old mount point instead of the new by-UUID one.


    If you want to dig deep, here's a command you might want to try:

    sudo omv-showkey shares to show where the mntent entries are referenced, and cross check with what you have above.


    You can also check how the system is seeing the array drives, just in case:

    lsblk

    blkid

    cat /proc/mdstat
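    For anyone following along, a healthy two-array /proc/mdstat looks roughly like this. The sample below is modelled on this setup (two RAID 1 mirrors); it is illustrative text, not live output, and the block counts are made up:

    ```shell
    cat > /tmp/mdstat-sample <<'EOF'
    md1 : active raid1 sdb[1] sda[0]
          7813894144 blocks super 1.2 [2/2] [UU]
    md127 : active raid1 sdf[1] sde[0]
          1953382400 blocks super 1.2 [2/2] [UU]
    EOF

    # [UU] means both mirror members are up; [_U] or [U_] would flag
    # a degraded array with one member missing
    grep -c '\[UU\]' /tmp/mdstat-sample   # prints 2
    ```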


    I think it would be faster to just take screenshots of the configs you have in the GUI, back up any folders you see as needed (docker/container folders, for example) and try a fresh install on a separate OS drive (a 32GB USB stick is more than enough).

    Keep the old OS drive safe.

    You can use OMV7 already or just go with OMV6 still.


    Have your drives disconnected and only use the install disk and the drive you will put the OS on.

    After you have OMV running, then plug back the RAID drives and mount (NOT create) them on the GUI.


    As long as the docker root is on a separate drive OR the containers' config folders are bind-mounted to paths outside the OS drive, the containers are easily recreated.

    See if you can redo all you need or not.


    If you can't, just put back the old OS drive and try to fix the issues.

  • Useful stuff, thanks!


    Interestingly, blkid shows md127 but not md1, will look into that now. Here is the output:

  • Noki It's not obvious what's happened here. In OMV6 I thought creating a new array via the WebUI would give each array a distinct label.


    Personally, I would have removed and replaced each 4TB drive with an 8TB drive in turn via the WebUI software RAID options and then resized the array. There would have been two lengthy resyncs.

  • I think I already had this problem (in some form) before I started this migration. I'm leaning towards reinstall.


    Is there any issue jumping straight to omv7?


    I have some Docker containers, all from Docker Hub, adguard and plex being the main ones. Other than that it's just a NAS with samba shares and some FTP setup for my CCTV cameras.

  • Noki Jumping to OMV7 will not solve this. As the 8TB filesystem is missing and presumably unmounted, have you checked it can be mounted at the CLI and all the data is there? E.g. mount /dev/md1 /mnt && cd /mnt. The problem is that what are supposed to be RAID members of different arrays have the same label "kablamo:1". At the moment I can't think of a fix while the 8TB RAID and CCTV RAID both remain in situ. A shutdown and physical removal of the CCTV RAID drives might allow the 8TB RAID array to be renamed at the CLI after a restart. But that's not something I've tested.


    By the way, what did you do with the pair of 4TB drives? Alternatively, if the 8TB RAID can be mounted then you could use one of the 4TB drives as an rsync target once it's been wiped and had an EXT4 filesystem created on it. Then you destroy the 8TB RAID, wipe both 8TB drives, re-create the 8TB RAID and sync the data back again from the 4TB drive. If the 8TB RAID refuses to mount at the CLI and the 4TB drives were not wiped, then you could destroy the current 8TB RAID first, then recreate it and rsync 4TB RAID to 8TB RAID.

  • The new 8TB array is mounted and shared in smb; I can access it from other computers on my network just fine (read and write). Only the users' private folders won't share via samba. I assume they are different somehow. I can access the directories and files from the CLI, so I know they are present on the array and OK.


    The old drives are sitting on my desk, unaltered since the copy. Though I have added new files to the 8TB array since then, so it needs a small re-sync.


    I made new DVD backups of crucial items, just in case. About 18 discs, so by no means all of it.


    I don't expect OMV7 specifically to solve it, but starting fresh might reset everything and make it work.


    I had the idea to just move the private folder and share them like the other shares and just live with it. But really I want a full fix.


    You did give me the idea to remove the CCTV drives and reboot. See what happens. Maybe that array is in the way somehow.


    I have been through the configs I can find around arrays, disks, mounts whatever and can see nothing wrong, just some references to old disks from years ago that no longer exist. Removed those and nothing changed, good or bad.


    I'll figure out what to do by the weekend and try again to resolve it.

  • Noki To clarify your current situation, please (re)-post the output of the following:



    1. Info about disks and ARRAYS


    blkid


    mdadm --detail --scan


    cat /etc/mdadm/mdadm.conf


    mdadm -E /dev/sd[a-z] | grep -E "dev|UUID|Name|Update|Check"



    2. Info about filesystem and mounts


    cat /etc/fstab


    findmnt --real


    lsblk -f


  • sda and sdb are currently the Samsung 8TB SSDs, forming the new main array, filesystem: 29b1084b-a7f3-432d-99b9-1f71e9a2c383


    sdc is an older 80GB Intel SSD for root, swap and EFI


    sde and sdf are currently the HDDs, a pair of Toshiba 2TB surveillance disks, filesystem: 9c535bde-4e53-451d-9f15-e514e68dafb6


  • sorry

    BOOM (md0) is no longer present in the system; those are the old disks. No references to md127.

  • Noki Thanks. As I mentioned above it's not normal to see two different arrays with the same name, which is kablamo:1 in your case. When the ARRAY details in mdadm --detail --scan don't match those in /etc/mdadm/mdadm.conf you'd normally just execute this command at the CLI:


    omv-salt deploy run initramfs mdadm


    As the two arrays have unique ARRAY UUIDs, as do the filesystems they contain, I believe this should still work as expected. So, I'd execute that command, reboot your system and check if all the filesystems are present. Not being able to access that "Personal" folder is, I think, a separate issue.
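    The mismatch that command repairs can be illustrated with sample files built from the ARRAY lines in this thread: entries present in the old mdadm.conf but absent from a live mdadm --detail --scan are stale, and the redeploy drops them.

    ```shell
    # Old conf with a stale md0 entry (ARRAY lines taken from this thread)
    cat > /tmp/mdadm-conf-old <<'EOF'
    ARRAY /dev/md0 metadata=1.2 name=KABLAMO:BOOM UUID=7b908306:a889315a:e1441a00:427c3fb2
    ARRAY /dev/md1 metadata=1.2 name=kablamo:1 UUID=f89894f4:351cf80c:5ee2c9b8:ea3df8ff
    EOF
    # Simplified stand-in for `mdadm --detail --scan` on the current system
    cat > /tmp/mdadm-scan <<'EOF'
    ARRAY /dev/md1 metadata=1.2 name=kablamo:1 UUID=f89894f4:351cf80c:5ee2c9b8:ea3df8ff
    EOF

    # comm needs sorted input; -23 keeps lines unique to the first file,
    # i.e. the stale ARRAY entries the redeploy will remove
    sort /tmp/mdadm-conf-old > /tmp/conf.sorted
    sort /tmp/mdadm-scan    > /tmp/scan.sorted
    comm -23 /tmp/conf.sorted /tmp/scan.sorted   # prints the stale md0 line
    ```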

  • debian:
    ----------
              ID: update_initramfs_nop
        Function: test.nop
          Result: True
         Comment: Success!
         Started: 11:30:30.446246
        Duration: 1.101 ms
         Changes:
    ----------
              ID: update_initramfs
        Function: cmd.run
            Name: update-initramfs -u
          Result: True
         Comment: Command "update-initramfs -u" run
         Started: 11:30:30.449318
        Duration: 30495.375 ms
         Changes:
                  ----------
                  pid:
                      2476648
                  retcode:
                      0
                  stderr:
                  stdout:
                      update-initramfs: Generating /boot/initrd.img-6.1.0-0.deb11.13-amd64
    ----------
              ID: remove_cron_daily_mdadm
        Function: file.absent
            Name: /etc/cron.daily/mdadm
          Result: True
         Comment: File /etc/cron.daily/mdadm is not present
         Started: 11:31:00.957716
        Duration: 2.832 ms
         Changes:
    ----------
              ID: divert_cron_daily_mdadm
        Function: omv_dpkg.divert_add
            Name: /etc/cron.daily/mdadm
          Result: True
         Comment: Leaving 'local diversion of /etc/cron.daily/mdadm to /etc/cron.daily/mdadm.distrib'
         Started: 11:31:00.963142
        Duration: 53.099 ms
         Changes:
    ----------
              ID: configure_default_mdadm
        Function: file.managed
            Name: /etc/default/mdadm
          Result: True
         Comment: File /etc/default/mdadm is in the correct state
         Started: 11:31:01.017287
        Duration: 367.283 ms
         Changes:
    ----------
              ID: configure_mdadm_conf
        Function: file.managed
            Name: /etc/mdadm/mdadm.conf
          Result: True
         Comment: File /etc/mdadm/mdadm.conf updated
         Started: 11:31:01.384802
        Duration: 303.666 ms
         Changes:
                  ----------
                  diff:
                      ---
                      +++
                      @@ -23,5 +23,3 @@
                       MAILFROM root
                       # definitions of existing MD arrays
                      -ARRAY /dev/md0 metadata=1.2 name=KABLAMO:BOOM UUID=7b908306:a889315a:e1441a00:427c3fb2
                      -ARRAY /dev/md1 metadata=1.2 name=kablamo:1 UUID=f89894f4:351cf80c:5ee2c9b8:ea3df8ff
    ----------
              ID: mdadm_save_config
        Function: cmd.run
            Name: mdadm --detail --scan >> /etc/mdadm/mdadm.conf
          Result: True
         Comment: Command "mdadm --detail --scan >> /etc/mdadm/mdadm.conf" run
         Started: 11:31:01.688698
        Duration: 37.68 ms
         Changes:
                  ----------
                  pid:
                      2481350
                  retcode:
                      0
                  stderr:
                  stdout:

    Summary for debian
    ------------
    Succeeded: 7 (changed=3)
    Failed:    0
    ------------
    Total states run:     7
    Total run time:  31.261 s
