unable to mount drives

  • I am hoping someone can help me out. My OMV drive crashed and I had to reinstall OMV. After reinstalling, all the disks are showing up under Disks, but nothing is showing up under File Systems, so I am unable to mount the file system. Is there maybe a step I am missing?

    Thanks in advance

  • I am tracking that, but the button is grayed out. I am assuming this is because the file system is not showing up in this section, just the OS drive. The individual drives do show up under the S.M.A.R.T. section and under the Disks section.

  • I have never seen that grayed out myself. If the drives show in the drives list and SMART, the only reason I could see for you not being able to mount them is that there is no filesystem on them, or the filesystem is not compatible with OMV.

    Were these mounted before as single drives?

    What was the filesystem used on them? If it was something like ZFS, that needs to be installed.

  • They were never mounted as single drives. Before my OS (OMV5) disk crashed, everything worked great. I believe the file system was ext4 (which I think is the default), definitely not ZFS. I am assuming that the old file system should be showing up under the "devices" column?

  • If they were not single drives, how were they configured? mdadm RAID, mergerfs, etc.?

    mdadm RAIDs easily remount with that little button that you say is greyed out. I have no experience with mergerfs to guide you there.

    If you are not sure how they were formatted and configured, you can try looking at the output of blkid and lsblk to see what you have.
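One way to read blkid output mechanically is to pull out the TYPE= field, which is what tells you whether a disk holds a plain filesystem or a RAID member. This is a hedged sketch using standard tools on a sample line in blkid's format (the device name and UUID are only examples):

```shell
# Illustrative sketch: extract the TYPE= field from a blkid-style line.
# The sample line imitates blkid output; on a real system, pipe blkid itself.
line='/dev/sdk: UUID="6d9b5107-b79e-1071-c248-36bb1d9cb2d2" TYPE="linux_raid_member"'
fstype=$(printf '%s\n' "$line" | grep -o 'TYPE="[^"]*"' | cut -d'"' -f2)
echo "$fstype"
```

A TYPE of "linux_raid_member" means the disk belongs to an mdadm array rather than carrying a mountable filesystem directly.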

  • The disks are showing up as "linux_raid_member" under blkid; here is one of the lines:

    /dev/sdk: UUID="6d9b5107-b79e-1071-c248-36bb1d9cb2d2" UUID_SUB="23ef7ce0-7753-32e7-0edd-4d625680bb56" LABEL="openmediavault:Media" TYPE="linux_raid_member"

    and for lsblk, it shows the following for all the drives that are part of the filesystem:


    sda 8:0 1 2.7T 0 disk

    Does the RO mean read-only? And does that mean the drives are marked as read-only?

  • Look at the line in blkid that names the RAID device. It should be an md###. That will tell you the filesystem used, after TYPE.

    Mine looks like this as an example (xfs is the filesystem I used):

    /dev/md127: LABEL="NASR5" UUID="4d26adbd-029a-4d74-a81f-c69bf193a099" BLOCK_SIZE="4096" TYPE="xfs"

    Your lsblk should look something like this. RO is read-only, but the 0 under it means "no", while a 1 would mean "yes".

    Here is a little lsblk tutorial: Linux lsblk Command Tutorial For Beginners
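In lsblk's default column layout (NAME MAJ:MIN RM SIZE RO TYPE), RO is the fifth field, so you can also check it with a one-liner. A hedged sketch using the line quoted above:

```shell
# Illustrative sketch: field 5 of a default lsblk line is the RO flag
# (0 = writable, 1 = read-only). The sample line is the one quoted earlier.
line='sda 8:0 1 2.7T 0 disk'
ro=$(printf '%s\n' "$line" | awk '{print $5}')
echo "$ro"
```

Here the 0 confirms the drive is not marked read-only.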

    Have a look at mdadm --detail /dev/md###, replacing the ### with the md number of your array. This will tell you the number of drives and whether all the drives are functioning.

    Verify that the numbers of total, active, working, failed, and spare devices make sense, and make sure the superblock is there.

    It should look something like this when the array is working. Note mine is a RAID5; yours may be different. If you tried to do a RAID10, this could get very complicated since you will have nested RAID levels.

    If you see any errors, it might help to post the full outputs of these commands.

    I am not the forum's resident mdadm expert, so if you have to start manually assembling or reconstructing the array with mdadm, it might be best to ask geaves. If it were my own array, I would probably use the UFS Explorer data recovery software I have access to, to assemble it and make sure I have a copy of everything important before trying any mdadm manipulations, and then I would spend a bit of time researching the commands to get it right. But he may be able to direct you more quickly.
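The sanity check above can be automated once you have the mdadm --detail output in hand. This is a hedged sketch: the sample text below only imitates the "Name : Value" line format that mdadm --detail prints, and the numbers are placeholders, not real output.

```shell
# Illustrative sketch: check device counts in mdadm --detail style output.
# 'detail' is sample text imitating the command's line format, not real output.
detail='Raid Level : raid6
Raid Devices : 10
Active Devices : 10
Failed Devices : 0'
# Pull the value after "Failed Devices : " -- anything nonzero needs attention.
failed=$(printf '%s\n' "$detail" | awk -F': ' '/Failed Devices/ {print $2}')
echo "$failed"
```

On a live system you would feed the real output in, e.g. `mdadm --detail /dev/md127 | awk -F': ' '/Failed Devices/ {print $2}'`.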

  • BernH, my blkid does not have an md### entry; here is the full readout

    and my lsblk does not have the indent under each drive with the md number; it has the following

  • Reading the first post: ssh into OMV and post the output of cat /proc/mdstat. For some reason, whilst OMV 'finds' the array after a clean install, it doesn't always come up as active. Therefore no information is displayed in Raid Management and no filesystem is shown.


  • geaves, thanks for the reply; here is the readout. I am assuming it is inactive. Thanks in advance for your help, I have been beating my head against the wall for hours.

    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : inactive sdk[1](S) sdj[0](S) sdc[7](S) sdg[3](S) sdb[8](S) sda[9](S) sdf[4](S) sdh[2](S) sde[5](S) sdd[6](S)
          29263428528 blocks super 1.2

    After running this I was then able to find the md number, and I ran mdadm --detail /dev/md###, which gave me the following output. For some reason it is showing the raid as raid0, but this is not the case; I am running raid6.
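The "inactive" state can be spotted mechanically in the /proc/mdstat text. A hedged sketch using the md127 line quoted above (on a live system you would read /proc/mdstat itself rather than a sample string):

```shell
# Illustrative sketch: flag inactive arrays in /proc/mdstat style output.
# 'mdstat' is the md127 line quoted above; on a real system: cat /proc/mdstat
mdstat='md127 : inactive sdk[1](S) sdj[0](S) sdc[7](S) sdg[3](S) sdb[8](S) sda[9](S) sdf[4](S) sdh[2](S) sde[5](S) sdd[6](S)'
# The third whitespace-separated field on an mdX line is the array state.
state=$(printf '%s\n' "$mdstat" | awk '/^md/ {print $3}')
echo "$state"
# Count the member devices listed on that line.
count=$(printf '%s\n' "$mdstat" | grep -o 'sd[a-z]*\[' | wc -l)
echo "$count"
```

Counting the members (10 here) against the expected array size is a quick way to see whether any disk has dropped out.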

  • geaves, bear with me. Just to confirm, do I run the command above one drive letter at a time, or will that command assemble all the drives consecutively? My Linux CLI experience is pretty limited.

  • geaves, this is producing an error. Maybe that disk is dead? Under S.M.A.R.T. the status circle is gray; all the other drives are green.

    mdadm: looking for devices for /dev/md127
    mdadm: Cannot read superblock on /dev/sdk
    mdadm: no RAID superblock on /dev/sdk
    mdadm: /dev/sdk has no superblock - assembly aborted
  • geaves, when I do that it says the following, and the array is not showing up in File Systems. Unless I need to reboot OMV to see it?

    mdadm: looking for devices for /dev/md127
    mdadm: Merging with already-assembled /dev/md/Media
    mdadm: cannot re-read metadata from /dev/sdk - aborting
  • geaves, the button is grayed out. I now have 9 out of the 10 drives activated, and when I click on Mount (which is now not grayed out) it still won't mount.

