/srv/dev-disk....is not a mountpoint

  • After upgrading from 3.x to 4.x, one filesystem on my NAS is no longer mounted.
    This is part of the syslog:


    May 13 21:36:38 nas monit[898]: 'mountpoint_srv_dev-disk-by-label-NasPool' status failed (1) -- /srv/dev-disk-by-label-NasPool is not a mountpoint
    May 13 21:37:00 nas systemd[1]: dev-disk-by\x2dlabel-NasPool.device: Job dev-disk-by\x2dlabel-NasPool.device/start timed out.
    May 13 21:37:00 nas systemd[1]: Timed out waiting for device dev-disk-by\x2dlabel-NasPool.device.
    May 13 21:37:00 nas systemd[1]: Dependency failed for /srv/dev-disk-by-label-NasPool.
    May 13 21:37:00 nas systemd[1]: srv-dev\x2ddisk\x2dby\x2dlabel\x2dNasPool.mount: Job srv-dev\x2ddisk\x2dby\x2dlabel\x2dNasPool.mount/start failed with result 'dependency'.
    May 13 21:37:00 nas systemd[1]: Dependency failed for File System Check on /dev/disk/by-label/NasPool.
    May 13 21:37:00 nas systemd[1]: systemd-fsck@dev-disk-by\x2dlabel-NasPool.service: Job systemd-fsck@dev-disk-by\x2dlabel-NasPool.service/start failed with result 'dependency'.
    May 13 21:37:00 nas systemd[1]: Startup finished in 6.246s (kernel) + 3min 3.926s (userspace) = 3min 10.172s.
    May 13 21:37:00 nas systemd[1]: dev-disk-by\x2dlabel-NasPool.device: Job dev-disk-by\x2dlabel-NasPool.device/start failed with result 'timeout'.
    May 13 21:37:08 nas monit[898]: 'mountpoint_srv_dev-disk-by-label-NasPool' status failed (1) -- /srv/dev-disk-by-label-NasPool is not a mountpoint
    May 13 21:37:38 nas monit[898]: 'mountpoint_srv_dev-disk-by-label-NasPool' status failed (1) -- /srv/dev-disk-by-label-NasPool is not a mountpoint
    May 13 21:38:08 nas monit[898]: 'mountpoint_srv_dev-disk-by-label-NasPool' status failed (1) -- /srv/dev-disk-by-label-NasPool is not a mountpoint
    May 13 21:38:38 nas monit[898]: 'mountpoint_srv_dev-disk-by-label-NasPool' status failed (1) -- /srv/dev-disk-by-label-NasPool is not a mountpoint


    Any suggestions?
    Thanks in advance.
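
    A quick first check for this kind of timeout is whether the kernel still sees the label at all. A minimal sketch (assuming the label is NasPool, as in the log above, and that /etc/fstab still has an entry for it):

    Code
    # does the label device exist?
    ls -l /dev/disk/by-label/
    blkid

    # what does systemd think about the mount?
    systemctl list-units --type=mount | grep -i naspool

    # try mounting by hand to get a concrete error message
    mount /srv/dev-disk-by-label-NasPool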

  • You probably need to give some more information about your system.

    Sorry!


    I was running OMV3 on an HP ProLiant system. I boot OMV from an SSD and currently have two HDDs in a mirror configuration.


    I decided to update to OMV4 today and thought I had read up on how to do it properly. I uninstalled all my plugins, ran the upgrade, and thought it went fine.


    On reboot, I can see this:





    My concern is the missing filesystem. If I'm honest, I'm not entirely sure it isn't something to do with the naming - I don't remember the "disk/by-label" part from before I upgraded. But I could be wrong!


    What I see in the System Logs is this:




    I do not know why it is not a mountpoint now, and I am unable to unmount and remount the filesystem. Any suggestions would be gratefully received!


    Thanks

  • Some more info:


    I had the drives set up with software RAID within the HP ProLiant setup and the SSD as the boot drive. After upgrading to OMV4, all of these associations had been lost within the ProLiant system, so I had to reallocate the SSD for boot, which worked fine. I also had to recreate the software RAID. I don't know if that has done something...
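
    If the RAID was recreated rather than just reassembled, the old filesystem (and its label) may no longer be present on the new array, which would explain why the by-label device never shows up. A sketch of how to check, assuming Linux mdadm software RAID (the device name /dev/md0 is an assumption - use whatever /proc/mdstat shows):

    Code
    # is the array assembled, and under which device name?
    cat /proc/mdstat

    # does the array still carry a filesystem with the expected label?
    blkid /dev/md0
    mdadm --detail /dev/md0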

  • Hello,


    I have the same problem after activating the NFS plugin and sharing a folder. The HDD that contains the shared folder has the same problem as yours.


    Here is part of the syslog:

    After some research, it seems to be this ticket, but I don't know how to force NFSv3 to resolve the problem...


    I tried to restore a backup of OMV that I made a few days ago, but... the backup didn't work ;( I think I have to learn how the omv-backup plugin works...


    I deleted the shared folder (in NFS) and deactivated NFS, then restarted the NAS, but the error is still there!


    If somebody can help us, thanks a lot.


    PS: Sorry for my bad English, I'm not used to posting on English forums ^^

    French noob user. Sorry for my English mistakes, I'm not used to posting on English forums.

    • MB: ASRock QC5000M microATX with AMD 5050 APU / RAM: 16 GB HyperX / Case: Fractal Design Node 804
    • Storage: Kingston 128 GB SSD (for OMV) / 1x 4 TB Seagate IronWolf, 1x 4 TB WD Red, 1x 200 GB Maxtor and 1x 230 GB Maxtor (old devices)
    • Docker CE (not the OMV plugin): managed by Portainer, running JDownloader2, TeamSpeak3, Nextcloud 20, etc.
    • Locate / MySQL (DBs for Kodi, Nextcloud) / SMB/CIFS
    • Official post

    Add nfsver=3 to the options box in your remotemount setup.
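
    For anyone who wants to verify that the option took effect, the remote mount is roughly equivalent to a manual NFS mount with the version pinned to 3. A sketch (server address, export path and mount point are hypothetical):

    Code
    # roughly what the remote mount boils down to
    mount -t nfs -o vers=3 192.168.1.50:/export/Videos /mnt/test

    # confirm which NFS version was actually negotiated
    nfsstat -m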

    omv 7.0.4-2 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.10 | compose 7.1.2 | k8s 7.0-6 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Thank you for the fast reply, Aaron!


    Done, and... I have this error:


    It seems to be related to the nfsver=3 option?




    • Official post

    Post the content of your options textbox.


  • Post the content of your options textbox.

    I just added nfsver=3; subtree_check and insecure were already in the textbox when I created the shared folder.


    Code
    subtree_check,insecure,nfsver=3

    Since I turned on NFS and created the shared folder for the first time, the HDD that contains the "Vidéos" shared folder has disappeared...


    Look at the screenshot I made of my shared folders: when I edit the "Vidéos" shared folder, the device name looks like an ID... I don't understand. My HDD is invisible.


    Perhaps there is no relation, but the HDD disappeared very shortly after I turned on NFS and created the NFS shared folder.


    I hope that makes sense...


    • Official post

    I just added nfsver=3; subtree_check and insecure were already in the textbox when I created the shared folder.

    nfsver=3 is supposed to go in the remotemount options textbox, not in the NFS share options.
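
    To make the distinction concrete, a sketch (export path and subnet are hypothetical): the NFS share options end up in the server-side export definition, while the remotemount options become client-side mount options, and that is where the NFS version belongs.

    Code
    # server side: the NFS share options roughly translate to a line in /etc/exports
    # - no NFS-version option belongs here
    /export/Videos 192.168.1.0/24(rw,subtree_check,insecure)

    # client side: the remotemount options pin the protocol version,
    # which at the plain mount level corresponds to vers=3 / nfsvers=3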


  • nfsver=3 is supposed to go in the remotemount options textbox, not in the NFS share options.

    Hi, some news.
    I can't try your solution, aaron, because the HDD that had the problem is dead... Even after physically removing it, I still got the wrong "...is not a mountpoint" log messages.


    The problem disappeared one day when I ran "umount -f /dev-disk-by-label...." but it reappeared the next day.
    So I bought a new HDD and did a clean install.


    I'm actually not using the NFS plugin any more (it was a test to make a share for the Xbox Kodi app, but it didn't work).
    Sorry for the inconvenience.

    • Official post

    Same problem on my system as well, after turning the system off for a few days.


    Now that I have started it again, my RAID5 is missing.

    There are several problems mentioned in this thread. You should probably open a new thread and add this information:
    Degraded or missing raid array questions

  • I had this problem today too. I did an update yesterday and everything seemed fine.


    For one hour I received mails like:


    Status failed Service mountpoint_srv_dev-disk-by-label-Casa
    Description: status failed (1) -- /srv/dev-disk-by-label-Casa is not a mountpoint


    Filesystem flags changed Service filesystem_srv_dev-disk-by-label-Casa
    Description: filesystem flags changed to 0x1008


    Status succeeded Service mountpoint_srv_dev-disk-by-label-Casa
    Description: status succeeded (0) -- /srv/dev-disk-by-label-Casa is a mountpoint


    even though I didn't edit the configuration - I was away.


    After a restart and some waiting, the problem disappeared.
    Thank you,
    Riccardo
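
    The monit check behind these mails is essentially a mountpoint test, so when they arrive it is worth confirming what the system itself reports at that moment. A minimal sketch (the Casa label is taken from the mails above):

    Code
    # what monit is effectively testing
    mountpoint /srv/dev-disk-by-label-Casa

    # current mount state, source device and options of that path
    findmnt /srv/dev-disk-by-label-Casa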

  • I got that one too on my machine.


    OMV3 clean install. I only do upgrades, never dist-upgrade - always clean installs. And I'm waiting with 4 until I stop reading complaints about stuff I need for my server. And, honestly, because I'm lazy - don't touch a running server :) - so I'm still on OMV3.


    On my OMV3, the remote-mount option is the cause of the log entry.


    I mount a CIFS share from a machine that is only online twice a day; it pulls my rsnapshot backup, and the remote mount is the link in the opposite direction. It's a precaution in case I lose data: I know I could just start the backup machine and have instant access, however far away I am and however bad my uplink is. I could use the CLI and pull back whatever I need right away - no GUI, no Samba over VPN etc., none of that fancy traffic-intensive stuff.


    While that machine is online, the log says "is a mountpoint"; when it is offline, "is not a mountpoint".


    While the mount is active, I have an additional drive with the same name behind /srv/....


    Maybe that helps someone.
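
    For a source machine that is only online part of the time, one common way to keep such a mount from blocking or alarming at boot is to use systemd automount options on the mount entry. A sketch, assuming a plain /etc/fstab CIFS entry (server, share, mount point and credentials file are hypothetical):

    Code
    //backupbox/rsnapshot  /srv/remotemount/backup  cifs  credentials=/root/.smbcred,noauto,x-systemd.automount,x-systemd.idle-timeout=60,_netdev  0  0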

  • Does anyone have an actual fix for this? The one right above - lol, just delete that reply.


    I set nfsver=3 on the NFS server and disabled NFSv4 in OMV, but that didn't help. On reboot the remote mount fails, and Plex doesn't work, along with the other served-out shares.
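
    One thing worth checking before changing more settings is which NFS versions the server actually advertises and whether the client can reach them at boot. A sketch (the server address is hypothetical):

    Code
    # which NFS versions and ports does the server advertise?
    rpcinfo -p 192.168.1.50

    # which exports does the server offer to this client?
    showmount -e 192.168.1.50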
