Adding new drive - mounted, can find it under /srv, but it doesn't show up in the SnapRAID or Shared Folders drop-down

    • New

      As the subject says, I installed a new drive, created a new file system (ext4) and mounted it, all via the GUI. I can access it when I SSH into the box, but the drive doesn't show up in the Shared Folders or SnapRAID drop-down. I want to add it as an additional parity drive.

      I did get the error below when mounting, but it seems related to the mergerfs union file system rather than to the new drive itself, which is /srv/dev-disk-by-label-Parity1.

      Does anyone have any ideas? Thanks.

      ----------
                ID: create_filesystem_mountpoint_9ff94b2d-2899-4b76-bf70-51d7a377d751
          Function: file.accumulated
            Result: True
           Comment: Accumulator create_filesystem_mountpoint_9ff94b2d-2899-4b76-bf70-51d7a377d751 for file /etc/fstab was charged by text
           Started: 22:16:37.388216
          Duration: 0.572 ms
           Changes:
      ----------
                ID: mount_filesystem_mountpoint_9ff94b2d-2899-4b76-bf70-51d7a377d751
          Function: mount.mounted
              Name: /srv/dev-disk-by-label-Parity2
            Result: True
           Comment: Target was already mounted
           Started: 22:16:37.388852
          Duration: 39.298 ms
           Changes:
                    ----------
                    umount:
                        Forced remount because options (acl) changed
      ----------
                ID: create_filesystem_mountpoint_6ba42e94-adb6-4845-b3b1-ba162972280a
          Function: file.accumulated
            Result: True
           Comment: Accumulator create_filesystem_mountpoint_6ba42e94-adb6-4845-b3b1-ba162972280a for file /etc/fstab was charged by text
           Started: 22:16:37.428281
          Duration: 0.621 ms
           Changes:
      ----------
                ID: mount_filesystem_mountpoint_6ba42e94-adb6-4845-b3b1-ba162972280a
          Function: mount.mounted
              Name: /srv/dev-disk-by-label-Parity1
            Result: True
           Comment: Target was already mounted
           Started: 22:16:37.428965
          Duration: 38.899 ms
           Changes:
                    ----------
                    umount:
                        Forced remount because options (acl) changed
      ----------
                ID: create_unionfilesystem_mountpoint_2abfed19-0612-4158-832c-5df513c28671
          Function: file.accumulated
            Result: True
           Comment: Accumulator create_unionfilesystem_mountpoint_2abfed19-0612-4158-832c-5df513c28671 for file /etc/fstab was charged by text
           Started: 22:16:37.467998
          Duration: 0.568 ms
           Changes:
      ----------
                ID: mount_filesystem_mountpoint_2abfed19-0612-4158-832c-5df513c28671
          Function: mount.mounted
              Name: /srv/dad6d43e-9384-4139-871f-e08aadb6cb04
            Result: False
           Comment: Unable to unmount /srv/dad6d43e-9384-4139-871f-e08aadb6cb04: umount: /srv/dad6d43e-9384-4139-871f-e08aadb6cb04: target is busy..
           Started: 22:16:37.468628
          Duration: 25.446 ms
           Changes:
                    ----------
                    umount:
                        Forced unmount and mount because options (direct_io) changed
      ----------
                ID: append_fstab_entries
          Function: file.blockreplace
              Name: /etc/fstab
            Result: True
           Comment: No changes needed to be made
           Started: 22:16:37.496094
          Duration: 2.014 ms
           Changes:

      Summary for OMVtest
      -------------
      Succeeded: 18 (changed=9)
      Failed:     1
      -------------
      Total states run:     19
      Total run time:  556.664 ms
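
      (A hedged sketch for chasing the "target is busy" failure above, using the mergerfs pool mountpoint taken from the log; omv-salt is the OMV 5.x tool that regenerates and re-applies the fstab configuration:)

          # See what still holds the mergerfs pool mountpoint open:
          sudo fuser -vm /srv/dad6d43e-9384-4139-871f-e08aadb6cb04
          sudo lsof +f -- /srv/dad6d43e-9384-4139-871f-e08aadb6cb04
          # Then re-generate and re-apply the mount configuration:
          sudo omv-salt deploy run fstab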
    • New

      My main goal is to add this new drive as a parity drive in SnapRAID. In addition to mounting the drive, do I need to take any other steps for it to appear in the drop-down list of the SnapRAID plugin? (One way to check from the CLI is sketched at the end of this post.)

      I don't remember having to do anything else when I first set up snapraid.

      Thanks.
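
      (That sketch: OMV 5.x keeps mounted file systems in its config database, and the drop-downs are presumably built from it. Assuming a stock OMV 5 install, this lists what the database knows about:)

          # List the file systems registered in the OMV config database:
          sudo omv-confdbadm read --prettify conf.system.filesystem.mountpoint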
    • New

      manubu wrote:

      In addition to mounting the drive, do I need to take any other steps for it to appear in the drop-down list of the SnapRAID plugin?
      The SnapRAID drive dropdown uses the same code as the shared folder dropdown. And if you get an error while mounting, it won't be set up correctly. I would reboot and try applying the configuration again.
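
      (After the reboot, a simple sanity check that the new parity file system actually mounted, using the label from the log earlier in the thread:)

          findmnt /srv/dev-disk-by-label-Parity1
          grep Parity1 /etc/fstab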
      omv 5.2.3 usul | 64 bit | 5.3 proxmox kernel | omvextrasorg 5.2.0
      omv-extras.org plugins source code and issue tracker - github

    • New

      Now I have unmounted and deleted the file system. At reboot, the machine hangs for a while because the boot sequence still looks for the file system I just deleted ("a start job is running for ....."). It looks like the deletion is not being applied correctly. At no point during the unmount or deletion did the GUI ask me to save the changes.

      I guess this means I have an issue somewhere in my OMV config file? (A check for that is sketched below.)
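
      (That check, under the assumption that the hang comes from a stale fstab entry for the deleted file system; the device and label names are the ones from this thread:)

          # Look for leftover entries for the deleted disk:
          grep -n -e sdc1 -e Parity1 /etc/fstab
          # OMV 5.x generates fstab from its config database; re-deploying it
          # should drop entries for file systems deleted in the GUI:
          sudo omv-salt deploy run fstab
          sudo systemctl daemon-reload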
    • New

      manubu wrote:

      It looks like the deletion is not being applied correctly. At no point during the unmount or deletion did the GUI ask me to save the changes.
      What are you deleting? Something from the File Systems tab, or the mergerfs pool? If it is mergerfs, the plugin applies the changes right away.
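
      (If it helps, a quick way to see what the mergerfs pool is currently mounted from; fuse.mergerfs is the file system type that mergerfs mounts report:)

          findmnt -t fuse.mergerfs
          grep mergerfs /etc/fstab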
    • New

      So I had an ext4 file system on my new drive /dev/sdc1 under the File Systems tab. When I tried to use it as my parity drive, it didn't show up in the SnapRAID plugin. I unmounted it and deleted it (all via the GUI), then rebooted the system. During boot it hangs for 1m30s, still looking for sdc1 ("a start job is running for ....."), so I assume it has not been deleted correctly, even though it no longer shows up in the GUI under the File Systems tab.
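
      (A hedged sketch for pinning down what that 1m30s start job is waiting on; 1m30s is the systemd default device timeout, which fits a missing fstab device:)

          systemd-analyze blame | head                          # slowest units of the last boot
          journalctl -b | grep -i -e 'start job' -e 'timed out'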
    • New

      OMV Basics 101:

      Before you try something new, make sure you have a good backup or clone of the root filesystem. Make that backup or clone when the root filesystem is in a 100% perfect state, without any problems or glitches at all. You should also make sure you know how to quickly and effortlessly restore it.

      If you boot from a thumbdrive or an SD card this is easy. Just remove the thumbdrive or card and create an image of it on some other computer. When needed, just restore the good image back to the thumbdrive or SD card.
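
      (A minimal sketch of that imaging step, assuming the boot stick shows up as /dev/sdX on the other computer; /dev/sdX and omv-boot.img are placeholders, so double-check the device with lsblk before running dd:)

          lsblk                                                      # identify the right device first
          sudo dd if=/dev/sdX of=omv-boot.img bs=4M status=progress  # image the stick
          # ...and to restore the image later:
          sudo dd if=omv-boot.img of=/dev/sdX bs=4M status=progress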

      Only try the new stuff when you have a good backup/clone. If you experience problems you can restore the working configuration and try again, until you get OMV with the new stuff working 100% perfectly as well.

      Then you back up/clone again. And perhaps try the next thing. Systematic. Steady. Never step backwards, always try to reach further. Always progress based on an absolutely perfect and pristine OMV install without any problems at all. Shining! Humming softly!

      OMV doesn't protect you from yourself. You are perfectly free to extend foot and shoot self in same. And repeat as long as you have more ammo and feet. But do you REALLY want to keep shooting yourself in the foot, again and again?

      WARNING!!!

      If you try new stuff without having an easy way to restore OMV back to a good working state, you WILL be severely punished!!! The severe, cruel and unusual punishment will be having to do a fresh reinstall of OMV from scratch. AGAIN! You WILL continue to receive this punishment, again and again, until you learn better. If you try to avoid the punishment you may randomly receive an even more severe punishment: long-lasting, low-level, continuous torture from having to use an OMV NAS that is not 100% perfectly configured and not working absolutely correctly.
      OMV 4: 9 x Odroid HC2 + 1 x Odroid HC1 + 1 x Raspberry Pi 4