Remote Share (OMV 2) versus Remote Mount (OMV 3)

  • To back up the data files on a Windows server, in OMV 2.2.6, I created a series of remote shares using the Remote Share plugin.


    As part of each remote share, I created a directory on the boot flash card that appeared to operate as a pass-through symlink. (It's pretty obvious that the actual contents of all the remote shares were not resident on an 8GB card.)
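    What that pass-through behavior usually boils down to is a network mount placed over the empty directory, so the directory shows the remote contents without storing them locally. As a rough sketch of the equivalent /etc/fstab entry (the server, share, credentials file, and mountpoint names here are made up, not taken from my actual setup):

```
# //winserver/data and the mountpoint are examples only
//winserver/data  /media/usb0/remote-data  cifs  credentials=/root/.smbcreds,_netdev,iocharset=utf8  0  0
```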


    In this scenario, under <Storage>, <Filesystems> and in fstab, all I see are locally connected drives and their partitions.
    _______________________________________________________________________


    To do the same thing in 3.0.68 Erasmus, I have locally connected drives and partitions in <Storage>, <Filesystems>. That's to be expected. However, all remote mounts defined in the Remote Mount plugin also appear as local file systems, with stats, in <Storage>, <File Systems> and are defined in fstab. Adding to that, all the remote mounts appear as individual disks under <System Information>, <Disk usage>.


    This, in itself, is not a problem. However, I tried to unmount a real local, unreferenced, and unused data drive. Since all the remote mounts seem to be refreshing on a continual basis (there are nine of them), the <unmount> button for the data drive in <Storage>, <File Systems> is greyed out most of the time. When it became available for a few seconds, I clicked it and got an error message. The drive wouldn't unmount.


    Finally, I physically disconnected it and rebooted. At that point, it was no longer mounted, but it showed up as "missing". I deleted it in <Storage>, <File Systems> and, while the dialog box returned no errors, it didn't go away. I even removed the drive's definition in /etc/fstab, rebooted, and it still shows up as "missing" in the GUI. This is more of a curiosity than anything important, but it would be nice to be able to purge an unreferenced, wiped, missing drive.


    Also, having several remote mounts appear as if they're local file systems, constantly updating with statistics, doesn't seem ideal. OMV 2.0's remote share seemed to be a more elegant solution. (On the other hand, this is just an opinion based on my very narrow use of the two plug-ins.)


    Thanks

  • Also, having several remote mounts appear as if they're local file systems, constantly updating with statistics, doesn't seem ideal. OMV 2.0's remote share seemed to be a more elegant solution. (On the other hand, this is just an opinion based on my very narrow use of the two plug-ins.)

    Did you update to the latest remotemount release? In reality, the remote file systems are mounted exactly the same way in both plugins. The difference is that remoteshare mounts to an existing shared folder path (which may have data in it), while remotemount creates a mountpoint that you can create shared folders from, and it shows up in the Filesystems tab. The reason it is slow for you is that you have so many remote mounts on an RPi, which is slow and has slow networking.


    I wrote both plugins, and I can say remoteshare is terrible. Remotemount can be improved, but your issue won't go away, due to the speed of the device.

    omv 5.6.0 usul | 64 bit | 5.4 proxmox kernel | omvextrasorg 5.5.3
    omv-extras.org plugins source code and issue tracker - github


    Please read this before posting a question.
    Please don't PM for support... Too many PMs!

  • Did you update to the latest remotemount release?

    According to the plugin page, I have remotemount 3.0.8 installed. I'm assuming that when I update, I get the latest patches and plugins. That was yesterday.


    Quote from ryecoaaron

    In reality, the remote file systems are mounted exactly the same way in both plugins. The difference is that remoteshare mounts to an existing shared folder path (which may have data in it), while remotemount creates a mountpoint that you can create shared folders from, and it shows up in the Filesystems tab.

    For remote share:
    I created empty folders on the boot SD card, one for each of the shares, as a tie-in to the remote share. That allowed me to run a "Local" Rsync job with the remote as the source and the local data drive as the destination. With no remote Rsync server required, it seemed the logical thing to do.
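    Because the remote share appears as a local path, the backup job is just a local-to-local rsync. Here's a minimal sketch of such a job; the paths and the helper name are my own assumptions, not the plugin's:

```shell
#!/bin/sh
# mirror_share SRC DEST: one-way local rsync, as used for a "Local" job.
# SRC would be the remote share's mountpoint, DEST a folder on the data drive.
mirror_share() {
    src="$1"
    dest="$2"
    mkdir -p "$dest"
    # -a preserves times/permissions; --delete drops files removed at the source
    rsync -a --delete "$src"/ "$dest"/
}

# Hypothetical invocation:
# mirror_share /media/usb0/remote-data /srv/data/backup/remote-data
```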


    I do have to say, from a resource point of view, remote-share consumes much less than remote-mount.


    Quote from ryecoaaron

    The reason it is slow for you is because you have so many remote mounts on an RPi which is slow and has slow networking

    Are you suggesting that I need a performance upgrade from an R-PI 2 to, maybe, an R-PI 3? That's another $29!!! 8o
    Just kidding. I know what you're getting at. Even the slowest PC can run rings around an ARM processor. However, power consumption (24x7), size, and other costs go up.


    You know, it's funny: having been involved in networking since the 10Base5 vampire-tap days, I never thought I'd think of 100Mbps FD as slow.
    ___________________


    There's got to be a way around the performance issue. Maybe I can set up a "read only" backup user at the top of the remote server at <ServerFolders>, so I can do a single remote-mount and Rsync the subdirs off of it.
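    That single-mount idea could be sketched like this: one mountpoint at the top of the server, then a loop that rsyncs each subdirectory off it. The function name and paths are assumptions for illustration only:

```shell
#!/bin/sh
# sync_subdirs ROOT DEST: rsync each top-level subdirectory of ROOT
# into its own folder under DEST. ROOT would be the single remote mount.
sync_subdirs() {
    root="$1"
    dest="$2"
    for d in "$root"/*/; do
        name=$(basename "$d")
        mkdir -p "$dest/$name"
        rsync -a "$d" "$dest/$name/"
    done
}

# Hypothetical invocation:
# sync_subdirs /media/remote-top /srv/data/backup
```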


    As constructive feedback for Remote Mount, if it's possible, how about not refreshing the filesystem stats as often?


    In any case, thanks for your time and the explanation.

  • According to the plugin page, I have remotemount 3.0.8 installed. I'm assuming that when I update, I get the latest patches and plugins. That was yesterday.

    The plugin page shows that the plugin is installed and at the latest version. Unfortunately, that doesn't mean the latest version is installed. dpkg -l | grep openm will tell you for sure, since the update was released yesterday.
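    If you want to pull just the version number out of that output, a small awk helper works. This is only a convenience sketch of mine, not part of the plugin or of dpkg:

```shell
#!/bin/sh
# pkg_version NAME: read `dpkg -l` style output on stdin and print the
# version column for the installed package NAME ("ii" lines only).
pkg_version() {
    awk -v p="$1" '$1 == "ii" && $2 == p { print $3 }'
}

# Typical use on the box itself:
# dpkg -l | pkg_version openmediavault-remotemount
```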



    I do have to say, from a resource point of view, remote-share consumes much less than remote-mount.

    Only when you have the web interface open to the filesystems tab. Any other time, it is the exact same.


    However, power consumption (24x7), size, and other costs go up.

    That is changing. Take a look at the Intel Braswell boards. They run circles around the RPi's CPU while using only about two more watts, are full x86, and have a SATA port, an M.2 slot, gigabit ethernet, and USB 3 :) I'm waiting to get my Udoo x86 with an N3160 Braswell CPU. I'll be able to give you exact numbers then.


    You know, it's funny: having been involved in networking since the 10Base5 vampire-tap days, I never thought I'd think of 100Mbps FD as slow.

    My first network was a base2 LANtastic network running at 1Mbps on all-DOS machines. When I finally upgraded to 10Base2, I thought it was super fast. Now I really want 10Gig networking, since 1Gig takes forever :)


  • Using dpkg -l | grep openm, I confirmed I have remotemount 3.0.8. There is a 3.0.69 OMV version update available, but I'm backing up (as I write this) before doing that upgrade.


    Since you say the performance of the two plugins is the same (remoteshare versus remotemount) outside of the web GUI, that's good enough for me. In my scenario, all the Pi is doing is maintaining changes to a lot of files. Once the initial sync is done, it maintains a few changes here and there. While I do have a 1Gig network, I schedule the Pi's Rsync jobs after hours, spread over a calendar week, so 100Mbps FD is fine.
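    Spreading the jobs over a calendar week like that can be done with plain cron entries (or the equivalent scheduled jobs in the GUI); here's a sketch with hypothetical script names, one share per night at 01:00:

```
# m h dom mon dow  command
0 1 * * 1  /usr/local/bin/rsync-share1.sh
0 1 * * 2  /usr/local/bin/rsync-share2.sh
0 1 * * 3  /usr/local/bin/rsync-share3.sh
```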


    What's amazing is how well UrBackup client backup jobs are working on an R-PI. Again, the initial client "full image" is slow, even when filtering local data files. However, after hours and in the background, who cares how long it takes?
    (I'll be conducting a couple of bare-metal client recovery tests soon, to see what that's like and how UrBackup disaster recovery works before it's needed.)
    _________________________________________


    On the networking thing:


    Believe it or not, 10Base5 is older than 10Base2. Check it out. -> https://en.wikipedia.org/wiki/10BASE5


    While it obviously wasn't the start of my career, I configured and installed Cisco 6509s, with router blades, back when IOS 11 was standard. (These were among the first so-called "switching routers".) IOS 11 didn't abstract the switch fabric into the IOS very well, so it was possible to see the ATM switch Cisco was using on the 6509's backplane. (So much for proprietary design. :- )
    1Gig wire-speed interfaces were, well, Star-Trek stuff back then, running on 240VAC power supplies. Now, 1Gig 8-port (unmanaged) switches are $29 on sale, with "green" low power consumption. Times change.
    _________________________________________


    Wow, the specs are definitely good on the Udoo, but it is a bit pricey compared to a PI 3. Here's hoping the cost comes down quickly. I hope you'll share a little brief on what you find.


    Oh, and thanks for what you're doing, especially for us NOOB's ?( who think we know what we're doing, but really don't.

  • Believe it or not, 10Base5 is older than 10Base2. Check it out.

    I had never even heard of 10Base5 :) Learned something new today. Most people I talk to have never heard of 10Base2 :D


  • 10Base5, "Thicknet", goes back to the days of ARPANET (the Internet's daddy), along with MicroVAXes and message routing by 4- to 7-letter ASCII text symbols. That gear was installed and operating in a data center I used to work in.
    I didn't install it or even repair it. It was extremely reliable and long-lived, which was good because, even back then, parts weren't available anymore. What we had on the shelf was "it".


    The most advanced equipment (10Base2 and a bit of 10BaseT), on the corporate LAN and in the server farm, was hub based. The collision domains were so large, on some segments, that they were experiencing what I used to call "net lock" (fatal collision storms). It was my job to bring the networks for the corporate LAN and the server farm out of the dark ages.


    So, having dealt with a bunch of NOOB's myself, I have a sincere appreciation for what you're doing for the community. Thanks.
