Adding a new drive: mounted and accessible under /srv, but it doesn't show up in the SnapRAID or Shared Folders drop-down

  • As the subject says, I installed a new drive, created a new ext4 file system on it, and mounted it, all via the GUI. I can access it when I SSH into the machine, but the drive doesn't show up in the Shared Folders or SnapRAID drop-down. I want to add it as an additional parity drive.


    I did get the error below when mounting, but it seems related to the mergerfs file system and has nothing to do with the new drive, which is /srv/dev-disk-by-label-Parity1.


    Does anyone have any ideas? Thanks.


    Code
    ----------
    ID: create_filesystem_mountpoint_9ff94b2d-2899-4b76-bf70-51d7a377d751
    Function: file.accumulated
    Result: True
    Comment: Accumulator create_filesystem_mountpoint_9ff94b2d-2899-4b76-bf70-51d7a377d751 for file /etc/fstab was charged by text
    Started: 22:16:37.388216
    Duration: 0.572 ms
    Changes:
    ----------
    ID: mount_filesystem_mountpoint_9ff94b2d-2899-4b76-bf70-51d7a377d751
    Function: mount.mounted
    Name: /srv/dev-disk-by-label-Parity2
    Result: True
    Comment: Target was already mounted
    Started: 22:16:37.388852
    Duration: 39.298 ms
    Changes:
    ----------
    umount: Forced remount because options (acl) changed
    ----------
    ID: create_filesystem_mountpoint_6ba42e94-adb6-4845-b3b1-ba162972280a
    Function: file.accumulated
    Result: True
    Comment: Accumulator create_filesystem_mountpoint_6ba42e94-adb6-4845-b3b1-ba162972280a for file /etc/fstab was charged by text
    Started: 22:16:37.428281
    Duration: 0.621 ms
    Changes:
    ----------
    ID: mount_filesystem_mountpoint_6ba42e94-adb6-4845-b3b1-ba162972280a
    Function: mount.mounted
    Name: /srv/dev-disk-by-label-Parity1
    Result: True
    Comment: Target was already mounted
    Started: 22:16:37.428965
    Duration: 38.899 ms
    Changes:
    ----------
    umount: Forced remount because options (acl) changed
    ----------
    ID: create_unionfilesystem_mountpoint_2abfed19-0612-4158-832c-5df513c28671
    Function: file.accumulated
    Result: True
    Comment: Accumulator create_unionfilesystem_mountpoint_2abfed19-0612-4158-832c-5df513c28671 for file /etc/fstab was charged by text
    Started: 22:16:37.467998
    Duration: 0.568 ms
    Changes:
    ----------
    ID: mount_filesystem_mountpoint_2abfed19-0612-4158-832c-5df513c28671
    Function: mount.mounted
    Name: /srv/dad6d43e-9384-4139-871f-e08aadb6cb04
    Result: False
    Comment: Unable to unmount /srv/dad6d43e-9384-4139-871f-e08aadb6cb04: umount: /srv/dad6d43e-9384-4139-871f-e08aadb6cb04: target is busy..
    Started: 22:16:37.468628
    Duration: 25.446 ms
    Changes:
    ----------
    umount: Forced unmount and mount because options (direct_io) changed
    ----------
    ID: append_fstab_entries
    Function: file.blockreplace
    Name: /etc/fstab
    Result: True
    Comment: No changes needed to be made
    Started: 22:16:37.496094
    Duration: 2.014 ms
    Changes:

    Summary for OMVtest
    -------------
    Succeeded: 18 (changed=9)
    Failed: 1
    -------------
    Total states run: 19
    Total run time: 556.664 ms
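The "target is busy" failure on the mergerfs mount point means some process is still using it: an open file, a shell whose working directory is inside it, or a running export (NFS/SMB). On the box itself, `fuser -vm /srv/<pool-uuid>` or `lsof` can name the culprit (the pool path above is from this thread, not a general value). A minimal, self-contained sketch of the same idea using /proc (Linux only; the directory here is a temporary stand-in, not a real mount):

```shell
#!/bin/sh
# Simulate a "target is busy" situation: a background shell keeps its
# working directory inside the directory we pretend to unmount.
tmpdir=$(mktemp -d)
tmpdir=$(cd "$tmpdir" && pwd -P)   # canonicalize for the /proc comparison
( cd "$tmpdir" && sleep 2 ) &
holder=$!
sleep 1

# Scan /proc for processes whose cwd is the busy directory -- roughly
# what `fuser -vm` reports for a mount point.
found=""
for p in /proc/[0-9]*; do
  if [ "$(readlink "$p/cwd" 2>/dev/null)" = "$tmpdir" ]; then
    found="${p#/proc/}"
  fi
done
echo "busy: held by pid $found"

wait "$holder"
rmdir "$tmpdir"
```

Until that process lets go, umount (and therefore OMV's Salt deploy) keeps failing; closing the session or stopping the service that uses the pool, then re-applying, usually clears it.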

  • My main goal is to add this new drive as a parity drive in SnapRAID. Besides mounting the drive, do I need to do any additional steps for it to appear in the drop-down list of the SnapRAID plugin?


    I don't remember having to do anything else when I first set up snapraid.


    Thanks.

  • Besides mounting the drive, do I need to do any additional steps for it to appear in the drop-down list of the SnapRAID plugin?

    The SnapRAID drive drop-down uses the same code as the shared folder drop-down, and if you got an error while mounting, it won't be set up correctly. I would reboot and try applying again.

    omv 5.5.17-2 usul | 64 bit | 5.4 proxmox kernel | omvextrasorg 5.4.2
    omv-extras.org plugins source code and issue tracker - github


    Please read this before posting a question.
    Please don't PM for support... Too many PMs!
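If rebooting alone doesn't help, the failed deploy can also be re-run by hand. A hedged sketch: on OMV 5 the same deploy is triggered with `omv-salt deploy run fstab` (the command is visible verbatim in the error output later in this thread), and the trailing summary block tells you whether a state is still failing. Parsing that summary, using the numbers from this thread's log as sample data:

```shell
#!/bin/sh
# Sample summary copied from the Salt log earlier in this thread; on the
# server itself the deploy is re-run with:  omv-salt deploy run fstab
summary='Succeeded: 18 (changed=9)
Failed: 1'

# Pull out the failure count; anything non-zero means the fstab deploy
# did not fully apply, and the drop-downs may stay stale.
failed=$(printf '%s\n' "$summary" | awk '/^Failed:/ {print $2}')
if [ "$failed" -gt 0 ]; then
  echo "deploy incomplete: $failed failed state(s)"
else
  echo "deploy clean"
fi
```

A clean summary (Failed: 0) after a re-run is a good sign the drive will then appear in the drop-downs.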

  • Now I have unmounted the file system and deleted it. At reboot the system hangs for a while because the boot sequence searches for the file system I just deleted ("a start job is running for ....."). It looks like the deletion is not being applied correctly. At no point during the unmount or deletion was I asked to save the changes in the GUI.


    I guess this means that I have an issue with my omv config file somewhere?
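The "a start job is running" delay is the classic symptom of a stale /etc/fstab entry: systemd waits (90 seconds by default) for a device that no longer exists. A sketch of the check, run here against sample fstab lines (the labels are taken from this thread; the real file to inspect is /etc/fstab):

```shell
#!/bin/sh
# Two sample fstab entries; only the first carries "nofail", which tells
# systemd not to block booting when the device is absent.
fstab='/dev/disk/by-label/Parity2 /srv/dev-disk-by-label-Parity2 ext4 defaults,nofail 0 2
/dev/disk/by-label/Parity1 /srv/dev-disk-by-label-Parity1 ext4 defaults 0 2'

# Mount points whose options lack nofail -- these stall the boot if the
# underlying device was deleted but the entry was left behind.
stalling=$(printf '%s\n' "$fstab" | awk '$4 !~ /nofail/ {print $2}')
echo "would stall boot: $stalling"
```

If a `grep srv /etc/fstab` on the real box still shows the deleted filesystem, removing that line (or re-running the fstab deploy) stops the hang.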

  • It looks like the delete is not being correctly applied. At no point during the unmount or deletion was I asked to save the changes in the GUI.

    What are you deleting? Something from the Filesystem tab or the mergerfs pool? If it is mergerfs, it applies the changes right away.


  • So I had an ext4 file system on my new drive /dev/sdc1 under the File Systems tab. When I tried to use it as my parity drive, it didn't show up in the SnapRAID plugin. I unmounted it, deleted it (all in the GUI), and then rebooted the system. During boot-up the system hangs for 1m30s, still looking for sdc1 ("a start job is running for ....."), so I am assuming it has not been deleted correctly, even though it no longer shows up in the GUI under the File Systems tab.

  • OMV Basics 101:


    Before you try something new, make sure you have a good backup or clone the root filesystem. A backup or clone of the root filesystem when it is in a 100% perfect state without any problems or glitches at all. You should also make sure you know how to quickly and effortlessly restore it.


    If you boot from a thumbdrive or an SD card this is easy. Just remove the thumbdrive or card and create an image of it on some other computer. When needed, just restore the good image back to the thumbdrive or SD card.


    Only try the new stuff when you have a good backup/clone. If you experience problems you can restore the working configuration and try again, until you get OMV with the new stuff working 100% perfectly as well.


    Then you backup/clone again. And perhaps try the next thing. Systematic. Steady. Never step backwards, always try to reach further. Always progress based on an absolutely perfect and pristine OMV install without any problems at all. Shining! Humming softly!


    OMV doesn't protect you from yourself. You are perfectly free to extend foot and shoot self in same. And repeat as long as you have more ammo and feet. But do you REALLY want to keep shooting yourself in the foot, again and again?


    WARNING!!!


    If you try new stuff without having an easy way to restore OMV back to a good working state, you WILL be severely punished!!! The severe, cruel and unusual punishment will be having to do a fresh reinstall of OMV from scratch. AGAIN! You WILL continue to receive this punishment, again and again, until you learn better. If you try to avoid the punishment you may randomly receive an even more severe punishment: Long-lasting low-level continuous torture from having to use a OMV NAS that is not 100% perfectly configured and not working absolutely correctly.

    Be smart - be lazy. Clone your rootfs.
    OMV 5: 9 x Odroid HC2 + 1 x Odroid HC1 + 1 x Raspberry Pi 4

  • Well, still troubleshooting, as I would like to understand why it is not working. What I saw is that the new drive never gets added to config.xml, which is strange, as I can access it through the CLI under /srv/; I just cannot get it referenced and used as a shared or parity drive.


    The backup idea is not so bad, though you could have said it in fewer words ;)
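That observation matches the symptom: the drop-downs are built from OMV's own database, not from what happens to be mounted. A sketch of the check against a sample <mntent> fragment shaped like OMV's config (the real file is /etc/openmediavault/config.xml; on OMV 5, `omv-confdbadm read conf.system.filesystem.mountpoint` should dump the same records, though treat that datamodel id as an assumption):

```shell
#!/bin/sh
# Sample <mntent> fragment shaped like OMV's config.xml; the real check
# is:  grep dev-disk-by-label /etc/openmediavault/config.xml
config='<mntent>
  <fsname>/dev/disk/by-label/Parity1</fsname>
  <dir>/srv/dev-disk-by-label-Parity1</dir>
  <type>ext4</type>
</mntent>'

# Count references to the mount directory; zero would mean the drive was
# mounted behind OMV's back and will never appear in the drop-downs.
registered=$(printf '%s\n' "$config" | grep -c 'dev-disk-by-label-Parity1')
echo "config references: $registered"
```

If the real config.xml has no such entry for the new drive, unmounting and re-creating the filesystem through the GUI (with a clean Apply) is what writes it.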

  • I have the same problem.


    I added a new 10 TB drive with the volume name WDWHITE04, and this drive is shown in "File Systems":

    Dateisysteme.png


    The new disk does not appear in the drop down of the SnapRAID Plugin.


    The other disks are shown, and I added them as data disks:


    SnapRAID.png


    Reboot has no effect.

    Is it a problem that the "SWAP" partition is not mounted? It's not mounted by default after installation.

    Stay healthy Bernd

  • Check whether sda was your boot disk when you first installed OMV.


    If so, that disk is now erroneously marked as the boot disk, because sde is your boot disk now.


    I'm not sure how to solve this, but it has been discussed on the forums.

  • That is, your sda, sdb, sdc order has changed.


    I can't find the appropriate post to point you to, but I am sure this error has been widely discussed.
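The point about shuffled device letters can be verified without guessing: identify each disk by its label or UUID rather than by sdX, since the letters can change between boots. A sketch against sample blkid-style output (the WDWHITE labels are from this thread; the device names are invented for the example, and the real command on the box is plain `blkid`):

```shell
#!/bin/sh
# Sample blkid-style output; labels are from this thread, device names
# are made up to illustrate reordering.
blkid_out='/dev/sda1: LABEL="WDWHITE01" UUID="1111-aaaa" TYPE="ext4"
/dev/sde1: UUID="2222-bbbb" TYPE="ext4"
/dev/sdb1: LABEL="WDWHITE04" UUID="3333-cccc" TYPE="ext4"'

# Map a label to whatever device letter it landed on this boot.
dev=$(printf '%s\n' "$blkid_out" | awk -F: '/WDWHITE04/ {print $1}')
echo "WDWHITE04 is currently $dev"
```

This is exactly why OMV mounts under /srv/dev-disk-by-label-* instead of /dev/sdX paths.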

  • Now I tried "unmount" and "mount", and then I get this error:



    Code
    The configuration has been changed. You must confirm the changes for them to take effect.
    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-salt deploy run fstab 2>&1' with exit code '1':
    /usr/lib/python3/dist-packages/salt/utils/decorators/signature.py:31: DeprecationWarning: `formatargspec` is deprecated since Python 3.5. Use `signature` and the `Signature` object directly
    *salt.utils.args.get_function_argspec(original_function)
    debian:
    ----------
    ID: create_filesystem_mountpoint_bc312397-139e-418a-8147-997f094485ae
    Function: file.accumulated
    Result: True
    Comment: Accumulator create_filesystem_mountpoint_bc312397-139e-418a-8147-997f094485ae for file /etc/fstab was charged by text
    Started: 11:24:51.979485
    Duration: 0.397 ms
    Changes:
    ----------
    ID: mount_filesystem_mountpoint_bc312397-139e-418a-8147-997f094485ae
    Function: mount.mounted
    Name: /srv/dev-disk-by-label-WDWHITE01
    Result: True
    Comment: Target was already mounted
    Started: 11:24:51.980206
    Duration: 78.969 ms
    Changes:
    ----------
    ID: create_filesystem_mountpoint_a172cfaf-000c-4393-a243-9ffbeef9004d
    Function: file.accumulated
    Result: True
    Comment: Accumulator create_filesystem_mountpoint_a172cfaf-000c-4393-a243-9ffbeef9004d for file /etc/fstab was charged by text
    Started: 11:24:52.059354
    Duration: 0.784 ms
    Changes:
    ----------
    ID: mount_filesystem_mountpoint_a172cfaf-000c-4393-a243-9ffbeef9004d
    Function: mount.mounted
    Name: /srv/dev-disk-by-label-WDWHITE02
    Result: True
    Comment: Target was already mounted
    Started: 11:24:52.060235
    Duration: 9.847 ms
    Changes:
    ----------
    ID: create_filesystem_mountpoint_38c301f8-f557-49c6-a0ff-51bfb7a6e79e
    Function: file.accumulated
    Result: True
    Comment: Accumulator create_filesystem_mountpoint_38c301f8-f557-49c6-a0ff-51bfb7a6e79e for file /etc/fstab was charged by text
    Started: 11:24:52.070231
    Duration: 0.647 ms
    Changes:
    ----------
    ID: mount_filesystem_mountpoint_38c301f8-f557-49c6-a0ff-51bfb7a6e79e
    Function: mount.mounted
    Name: /srv/dev-disk-by-label-WDWHITE03
    Result: True
    Comment: Target was already mounted
    Started: 11:24:52.070953
    Duration: 10.265 ms
    Changes:
    ----------
    ID: create_filesystem_mountpoint_c108634d-c7ed-4f17-b8fe-f9e502637968
    Function: file.accumulated
    Result: True
    Comment: Accumulator create_filesystem_mountpoint_c108634d-c7ed-4f17-b8fe-f9e502637968 for file /etc/fstab was charged by text
    Started: 11:24:52.081392
    Duration: 0.763 ms
    Changes:
    ----------
    ID: mount_filesystem_mountpoint_c108634d-c7ed-4f17-b8fe-f9e502637968
    Function: mount.mounted
    Name: /srv/dev-disk-by-label-WDRED01
    Result: True
    Comment: Target was already mounted
    Started: 11:24:52.082245
    Duration: 11.791 ms
    Changes:
    ----------
    ID: create_filesystem_mountpoint_8f5e32b8-cf29-4c80-94ee-c805773df256
    Function: file.accumulated
    Result: True
    Comment: Accumulator create_filesystem_mountpoint_8f5e32b8-cf29-4c80-94ee-c805773df256 for file /etc/fstab was charged by text
    Started: 11:24:52.094213
    Duration: 0.776 ms
    Changes:
    ----------
    ID: mount_filesystem_mountpoint_8f5e32b8-cf29-4c80-94ee-c805773df256
    Function: mount.mounted
    Name: /srv/dev-disk-by-label-WDWHITE04
    Result: True
    Comment: Target was successfully mounted
    Started: 11:24:52.095085
    Duration: 82.641 ms
    Changes:
    ----------
    mount: True
    ----------
    ID: create_unionfilesystem_mountpoint_8faca333-c2ef-4fec-ae1a-ab5d929abefa
    Function: file.accumulated
    Result: True
    Comment: Accumulator create_unionfilesystem_mountpoint_8faca333-c2ef-4fec-ae1a-ab5d929abefa for file /etc/fstab was charged by text
    Started: 11:24:52.178052
    Duration: 1.143 ms
    Changes:
    ----------
    ID: mount_filesystem_mountpoint_8faca333-c2ef-4fec-ae1a-ab5d929abefa
    Function: mount.mounted
    Name: /srv/77df64f8-a5fc-4f19-87d7-f826c2351251
    Result: False
    Comment: Unable to unmount /srv/77df64f8-a5fc-4f19-87d7-f826c2351251: umount: /srv/77df64f8-a5fc-4f19-87d7-f826c2351251: target is busy..
    Started: 11:24:52.179342
    Duration: 22.199 ms
    Changes:
    ----------
    umount: Forced unmount and mount because options (cache.files=off) changed
    ----------
    ID: create_bind_mountpoint_e54b426c-de85-4720-84ba-6fa717f2f512
    Function: file.accumulated
    Result: True
    Comment: Accumulator create_bind_mountpoint_e54b426c-de85-4720-84ba-6fa717f2f512 for file /etc/fstab was charged by text
    Started: 11:24:52.201760
    Duration: 0.77 ms
    Changes:
    ----------
    ID: mount_bind_mountpoint_e54b426c-de85-4720-84ba-6fa717f2f512
    Function: mount.mounted
    Name: /export/Movies
    Result: True
    Comment: Target was already mounted
    Started: 11:24:52.202618
    Duration: 7.745 ms
    Changes:
    ----------
    ID: append_fstab_entries
    Function: file.blockreplace
    Name: /etc/fstab
    Result: True
    Comment: No changes needed to be made
    Started: 11:24:52.212310
    Duration: 2.157 ms
    Changes:

    Summary for debian
    -------------
    Succeeded: 14 (changed=2)
    Failed: 1
    -------------
    Total states run: 15
    Total run time: 230.894 ms
  • I did a reboot, then another error was shown. After another reboot, I can now add the new drive as parity and a sync is running!

    Thank you !


    Is it OK that the data disk shows directory content while the parity disk shows none?


    Stay healthy Bernd
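That is expected behaviour. SnapRAID never mirrors your directory tree onto the parity disk; the parity disk only ever holds the file(s) named in snapraid.conf, while data disks keep your folders plus a snapraid.content file. A hedged sketch of such a fragment (the paths are assumptions built from this thread's drive labels, not the poster's actual config):

```
# sample snapraid.conf fragment -- paths are illustrative assumptions
parity /srv/dev-disk-by-label-Parity1/snapraid.parity
content /srv/dev-disk-by-label-WDWHITE01/snapraid.content
data d1 /srv/dev-disk-by-label-WDWHITE01
```

So after a sync, listing the parity mount shows only snapraid.parity (and a content file if one is configured there), which is exactly the "empty-looking" disk described above.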
