After upgrade to 4.1.23-1 my mdraid is no longer mounted

    • Resolved
    • OMV 4.x
    • Hi,

      It was the disk that had changed name.

      I.e. this line in fstab:

      Source Code

      /dev/disk/by-label/pgnas2 /srv/dev-disk-by-label-pgnas2 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2



      The device /dev/disk/by-label/pgnas2 does not exist.

      If I change that line in fstab to:


      Source Code

      /dev/md127 /srv/dev-disk-by-label-pgnas2 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2

      and do a mount -a,

      I then have:


      Source Code

      root@pgnas2:/export/pgnas2# df -h
      Filesystem   Size  Used Avail Use% Mounted on
      udev         2.0G     0  2.0G   0% /dev
      tmpfs        395M   11M  384M   3% /run
      /dev/sdf1    106G  3.2G   98G   4% /
      tmpfs        2.0G     0  2.0G   0% /dev/shm
      tmpfs        5.0M     0  5.0M   0% /run/lock
      tmpfs        2.0G     0  2.0G   0% /sys/fs/cgroup
      tmpfs        2.0G     0  2.0G   0% /tmp
      /dev/md127   8.2T  7.5T  700G  92% /export/pgnas2


      My disk is mounted (last line) and I can access the files on the NAS.

      However, I still have the same problem in the OMV GUI :(

      I still cannot see that filesystem mounted in the GUI.

      But perhaps you can help me fix that last bit?

      Br
      Patric
    • New

      Done, still the same: not listed under Filesystems, and I can't change permissions on the shares.

      Source Code

      Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; systemctl start 'sharedfolders-ssd.mount' 2>&1' with exit code '1': Assertion failed on job for sharedfolders-ssd.mount.
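
      For reference, this is what I can run to dig into why that mount unit refuses to start; a minimal sketch with standard systemd tools, nothing OMV-specific (unit name taken from the error above):

      Source Code

      # Show the unit state and the most recent errors:
      systemctl status 'sharedfolders-ssd.mount'
      # Show the generated unit file to see what it mounts and which
      # assertions/conditions it carries (often a missing source directory):
      systemctl cat 'sharedfolders-ssd.mount'
      # Full log lines for the unit:
      journalctl -u 'sharedfolders-ssd.mount' --no-pager -n 50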

      Is there some OMV-specific command to rescan the filesystems?

      Br
      Patric
    • New

      Hi,



      I understand the mount now.

      I.e. in /etc/fstab we have two lines for this mount:


      Source Code

      # Here it mounts the filesystem the first time:
      /dev/disk/by-label/pgnas2 /srv/dev-disk-by-label-pgnas2 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
      # Here it bind mounts it to /export/pgnas2:
      /srv/dev-disk-by-label-pgnas2/pgnas /export/pgnas2 none bind,nofail 0 0

      I have also managed to fix the issue with not being able to change permissions on the shares: my ssd ZFS share was not mounted, and after I manually mounted it I can change permissions again.

      So as far as I can tell, I have three problems left:

      1. The /dev/md127 disk is not listed under Filesystems in the OMV GUI.

      2. My ZFS pool does not seem to mount on boot, and I don't know how to make it mount automatically.

      3. How do I change my fstab in OMV so that it mounts the disk by the correct name?

      Now I see that my fstab has been changed again, so a fourth problem:

      4. The old name for my /dev/md127 is listed under Filesystems, as shown in this screenshot:

      [Screenshot: https://cdn1.imggmi.com/uploads/2019/8/10/8e4f0b102f061a08bfd58f72128c8bd0-full.png]

    • New

      As I said, I'm simply out of ideas; something is corrupt in your OMV setup. How, why, what, I've no idea. You can edit fstab, but by the look of it that is not going to fix this. The fact that your ssd does not mount automatically also points to some sort of corruption, and TBH this isn't just from an update; it could point to failing hardware. Have you had a power failure? These are the sorts of things that could cause your problem.

      Do you have a backup of the boot drive and your data? Have you run any SMART tests across the drives, or have you set up SMART in the GUI?

      The screenshot is unreadable; it's too small.
    • New

      Hi,

      Sure, it might be some corruption of the OMV config files.

      But yes, it's definitely from the update. I had no problems before, and if I do things manually they work, which points to a software problem.

      I run a SMART test across my drives every night, and the RAID is still active and healthy, so it's extremely unlikely to be a problem with my disks.

      It's an OMV config/database/software issue.

      I keep it very simple; I only have one plugin added, and it's the ZFS plugin.

      Super duper thanks for your help, geaves!

      Can anyone else in this community help further?

      Br
      Patric
    • New

      Please, I'm a bit desperate for help! Is reinstalling the OS a possibility? I can access all my data on the OMV box, so my data is OK.

      But I cannot access the OMV NAS remotely due to the problems above.

      I do not mind re-configuring everything after a re-install; I just need to know if my data will be OK, and whether this is the recommended approach to fix my issues.

      Br
      Patric

      @ryecoaaron, you have helped me in the past and seem very knowledgeable about OMV. Could you please have a look at my post and see if you can help?

    • New

      If you follow the install instructions for OMV, you can't lose data on the data disks because they shouldn't be connected. I don't have time to reread all of these posts, but we see this a lot when moving to OMV 4.x because there are existing zfs signatures on the mdadm array disks. There are a few posts about how to use wipefs to fix this.
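
      For reference, a rough sketch of how those signatures can be inspected with wipefs (standard util-linux; the device name and offset below are just examples, so check the output carefully before erasing anything):

      Source Code

      # With no options wipefs only lists the signatures it finds; it changes nothing:
      wipefs /dev/sda
      # If a stale zfs_member signature shows up next to linux_raid_member,
      # it can be erased by the offset reported above, for example:
      # wipefs -o 0x3f000 /dev/sda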

      I would think re-importing your zfs pool would make it mount on boot. Really hoping you aren't using USB disks.
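
      A minimal sketch of the re-import, assuming the pool is the one called "ssd" from your df output (standard ZFS on Linux commands):

      Source Code

      # List pools that are exported/available for import:
      zpool import
      # Import the pool by name so its datasets get mounted again:
      zpool import ssd
      # Check that the datasets are mounted where expected:
      zfs list -o name,mountpoint,mounted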

      I don't know what you mean by the old name for your array, but the only way to change fstab and have the change be permanent is to edit the mntent section of /etc/openmediavault/config.xml.
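
      Roughly, the workflow would look something like this; a sketch only, assuming the OMV 4.x omv-mkconf tool is used to regenerate /etc/fstab from the database after the XML edit:

      Source Code

      # Edit the <fsname> inside the matching <mntent> block:
      nano /etc/openmediavault/config.xml
      # Regenerate /etc/fstab from the OMV database:
      omv-mkconf fstab
      # Re-read fstab and mount everything:
      mount -a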
    • New

      Hi,

      Many thanks for the reply.

      Before the upgrade, my mdraid disk was mounted like this by OMV in fstab:

      Source Code

      /dev/disk/by-label/pgnas2 /srv/dev-disk-by-label-pgnas2 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
      /srv/dev-disk-by-label-pgnas2/pgnas /export/pgnas2 none bind,nofail 0 0




      However, after the upgrade /dev/disk/by-label/pgnas2 does not exist on my system (and it is no longer listed under Filesystems in the OMV GUI).
      If, however, I manually change my fstab to:

      Source Code

      /dev/md127 /srv/dev-disk-by-label-pgnas2 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquo$
      /srv/dev-disk-by-label-pgnas2/pgnas /export/pgnas2 none bind,nofail 0 0

      I can mount my mdraid with mount -a.

      I tried changing:

      /etc/openmediavault/config.xml

      From:


      Source Code

      <mntent>
        <uuid>2c344fc0-bd2d-4df5-982b-87c740d299ca</uuid>
        <fsname>/dev/disk/by-label/pgnas2</fsname>
        <dir>/srv/dev-disk-by-label-pgnas2</dir>

      to:


      Source Code

      <mntent>
        <uuid>2c344fc0-bd2d-4df5-982b-87c740d299ca</uuid>
        <fsname>/dev/md127</fsname>
        <dir>/srv/dev-disk-by-label-pgnas2</dir>

      and rebooted, but fstab has not changed.


      Best Regards
      Patric
    • New

      mrpg wrote:

      However, after the upgrade /dev/disk/by-label/pgnas2 does not exist on my system (and it is no longer listed under Filesystems in the OMV GUI).
      This is probably because your raid array is in a bad state. Therefore the filesystem on that array doesn't properly exist, which means the device you listed doesn't exist. You need to get your array out of auto-read-only mode so the filesystem shows up. Changing fstab or the mntent entry is not the right way to go.
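
      If it really is the auto-read-only case, this is a sketch of the usual way out with standard mdadm/udev commands (array name taken from your earlier posts):

      Source Code

      # auto-read-only would show up in the md127 line here:
      cat /proc/mdstat
      # Switch the array back to read-write:
      mdadm --readwrite /dev/md127
      # Re-run the udev rules so the /dev/disk/by-label link can reappear:
      udevadm trigger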
    • New

      Hi,

      I am not sure the raid is in a bad state?

      I.e. I have manually changed fstab now, the md127 disk is mounted in RW, and I can read files, modify files, and create new files on it.

      Source Code

      root@pgnas2:~# df -h
      Filesystem   Size  Used Avail Use% Mounted on
      udev         2.0G     0  2.0G   0% /dev
      tmpfs        395M  5.9M  389M   2% /run
      /dev/sdf1    106G  2.8G   98G   3% /
      tmpfs        2.0G     0  2.0G   0% /dev/shm
      tmpfs        5.0M     0  5.0M   0% /run/lock
      tmpfs        2.0G     0  2.0G   0% /sys/fs/cgroup
      tmpfs        2.0G     0  2.0G   0% /tmp
      ssd          721G  649G   73G  90% /ssd
      /dev/md127   8.2T  7.5T  700G  92% /export/pgnas2


      Source Code

      root@pgnas2:/# mdadm --detail /dev/md127
      /dev/md127:
      Version : 1.2
      Creation Time : Fri Jan 13 16:11:21 2017
      Raid Level : raid5
      Array Size : 8790405120 (8383.18 GiB 9001.37 GB)
      Used Dev Size : 2930135040 (2794.39 GiB 3000.46 GB)
      Raid Devices : 4
      Total Devices : 4
      Persistence : Superblock is persistent
      Intent Bitmap : Internal
      Update Time : Mon Aug 12 19:30:47 2019
      State : clean
      Active Devices : 4
      Working Devices : 4
      Failed Devices : 0
      Spare Devices : 0
      Layout : left-symmetric
      Chunk Size : 512K
      Name : pgnas2:Raid5 (local to host pgnas2)
      UUID : a4eaac6d:09a7678a:a41039f5:45fc8a88
      Events : 98721
      Number Major Minor RaidDevice State
      0 8 0 0 active sync /dev/sda
      1 8 32 1 active sync /dev/sdc
      2 8 48 2 active sync /dev/sdd
      3 8 64 3 active sync /dev/sde
    • New

      mrpg wrote:

      I am not sure the raid is in a bad state?
      In your first post it was. I didn't see the output of cat /proc/mdstat anywhere else. So, after reboot, the array comes up fine?
    • New

      Hi,

      No, I'm afraid not.

      I have to manually change /etc/fstab from the device that no longer exists, i.e.:

      From:

      Source Code

      /dev/disk/by-label/pgnas2 /srv/dev-disk-by-label-pgnas2 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
      /srv/dev-disk-by-label-pgnas2/pgnas /export/pgnas2 none bind,nofail 0 0

      To:

      Source Code

      /dev/md127 /srv/dev-disk-by-label-pgnas2 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquo$
      /srv/dev-disk-by-label-pgnas2/pgnas /export/pgnas2 none bind,nofail 0 0

      After that I can access the files locally on the NAS box.

      Br
      Patric
    • New

      mrpg wrote:

      No, I'm afraid not.
      I meant the array, not the filesystem on the array. Something is up with the filesystem/label, and udev isn't populating the filesystem in /dev/disk/by-label. This is why I said you should fix the problem instead of the fstab entry. Are there any weird entries in dmesg?
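
      A quick way to see what udev itself has recorded for the device; a sketch with standard udev tools, nothing OMV-specific:

      Source Code

      # A healthy ext4 filesystem on the array should report
      # ID_FS_TYPE=ext4 and ID_FS_LABEL=pgnas2 here:
      udevadm info --query=property --name=/dev/md127
      # Re-run the rules for all devices and wait for the event queue to settle:
      udevadm trigger
      udevadm settle
      ls -al /dev/disk/by-label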
    • New

      Hi, OK,

      I just rebooted, and yes, the array seems to be there after reboot:


      Source Code

      root@pgnas2:~# uptime
      19:57:41 up 1 min, 1 user, load average: 0.26, 0.19, 0.08
      root@pgnas2:~# cat /proc/mdstat
      Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
      md127 : active raid5 sde[3] sdd[2] sda[0] sdc[1]
      8790405120 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/22 pages [0KB], 65536KB chunk
      unused devices: <none>
      root@pgnas2:~# mdadm --detail /dev/md127
      /dev/md127:
      Version : 1.2
      Creation Time : Fri Jan 13 16:11:21 2017
      Raid Level : raid5
      Array Size : 8790405120 (8383.18 GiB 9001.37 GB)
      Used Dev Size : 2930135040 (2794.39 GiB 3000.46 GB)
      Raid Devices : 4
      Total Devices : 4
      Persistence : Superblock is persistent
      Intent Bitmap : Internal
      Update Time : Mon Aug 12 19:58:16 2019
      State : clean
      Active Devices : 4
      Working Devices : 4
      Failed Devices : 0
      Spare Devices : 0
      Layout : left-symmetric
      Chunk Size : 512K
      Name : pgnas2:Raid5 (local to host pgnas2)
      UUID : a4eaac6d:09a7678a:a41039f5:45fc8a88
      Events : 98723
      Number Major Minor RaidDevice State
      0 8 0 0 active sync /dev/sda
      1 8 32 1 active sync /dev/sdc
      2 8 48 2 active sync /dev/sdd
      3 8 64 3 active sync /dev/sde



      But as we both said, no filesystem is mounted.

      Paste of my dmesg:
      termbin.com/ujcj

      Br
      Patric
    • New

      What is the output of (as root):

      ls -al /dev/disk/by-label
      systemctl restart systemd-udev-trigger.service
      ls -al /dev/disk/by-label
      blkid
    • New

      Hi,

      Sorry for the slow response.


      Source Code

      root@pgnas2:~# ls -al /dev/disk/by-label
      total 0
      drwxr-xr-x 2 root root 60 Aug 12 19:56 .
      drwxr-xr-x 8 root root 160 Aug 12 19:56 ..
      lrwxrwxrwx 1 root root 10 Aug 12 19:56 ssd -> ../../sdb





      Source Code

      root@pgnas2:~# systemctl restart systemd-udev-trigger.service
      root@pgnas2:~#

      Source Code

      root@pgnas2:~# ls -al /dev/disk/by-label
      total 0
      drwxr-xr-x 2 root root 60 Aug 12 19:56 .
      drwxr-xr-x 8 root root 160 Aug 12 19:56 ..
      lrwxrwxrwx 1 root root 10 Aug 13 15:35 ssd -> ../../sdb1

      Source Code

      /dev/sda: UUID="a4eaac6d-09a7-678a-a410-39f545fc8a88" UUID_SUB="829814f6-afe7-cba4-e318-24feaf4234f0" LABEL="pgnas2:Raid5" TYPE="linux_raid_member"
      /dev/sdb1: LABEL="ssd" UUID="16055115822611016235" UUID_SUB="16333854279340365342" TYPE="zfs_member" PARTLABEL="zfs-cfaad19e7bf04ac2" PARTUUID="ffcf088a-ea6d-d643-8c7c-6a78625f922c"
      /dev/sde: UUID="a4eaac6d-09a7-678a-a410-39f545fc8a88" UUID_SUB="1f0a97e5-d8b2-7055-daea-5ad2158b81a1" LABEL="pgnas2:Raid5" TYPE="linux_raid_member"
      /dev/sdc: UUID="a4eaac6d-09a7-678a-a410-39f545fc8a88" UUID_SUB="d994823b-0bdb-77f3-8536-91fd896168ff" LABEL="pgnas2:Raid5" TYPE="linux_raid_member"
      /dev/sdd: UUID="a4eaac6d-09a7-678a-a410-39f545fc8a88" UUID_SUB="754525e4-e931-00ab-35ad-05355e77b4ad" LABEL="pgnas2:Raid5" TYPE="linux_raid_member"
      /dev/sdf1: UUID="f53de5f0-6b4f-4e67-9d08-2267ef9542f1" TYPE="ext4" PARTUUID="f6eb0bdb-01"
      /dev/sdf5: UUID="db997024-abda-4a75-984e-f8a82430d819" TYPE="swap" PARTUUID="f6eb0bdb-05"
      /dev/sdb9: PARTUUID="b56e0178-1205-694d-baae-07356e2e412b"
    • New

      Your array's filesystem is not showing up in blkid. This is why udev is not populating the /dev/disk entries. What is the output of (as root):

      partprobe -s
      blkid /dev/md127
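
      If blkid still prints nothing for md127, the ext4 superblock can also be checked directly; a read-only sketch with standard e2fsprogs (unmount the filesystem first if possible):

      Source Code

      # Print the superblock; a healthy filesystem should show the volume name pgnas2:
      dumpe2fs -h /dev/md127
      # Read-only consistency check (-n answers "no" to every repair prompt):
      fsck.ext4 -n /dev/md127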