Create Ram disk and share over SMB

    • OMV 4.x
    • Resolved
    • Create Ram disk and share over SMB

      Hi
      My OMV NAS has plenty of RAM available.
      So I wonder: is it possible to set aside part of the free RAM as a tmpfs disk and share it over SMB/CIFS?
      Just so you know, I have already achieved this on one of my Ubuntu servers, where it works great.

      The reason I'm not simply applying that same procedure on OMV is that things like fstab and Samba configuration are better handled through its web admin interface.
      I would therefore appreciate a similarly polished solution through the GUI.
      Or is there a plugin for this? I couldn't find one.

      In case it is not possible with the GUI, how can I instruct OMV not to overwrite my manually set configs?

      Thanks
    • waqaslam wrote:

      In case, it is not possible with GUI, then how can I instruct OMV not to overwrite my manually set configs?
      It is kind of possible using the web interface, but the setup would require the CLI. You would need to create a directory and then add a tmpfs entry to /etc/fstab outside of the OMV tags. Then mount it, and in the extra options box on the Samba plugin's settings tab, add a complete Samba share entry. This makes the changes persist across other Samba changes, and it does work (I've done it for other types of shares, but not tmpfs).
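      For anyone following along, the CLI part of those steps can be sketched like this (OMV 4.x, run as root). The mountpoint and size are the example values used later in this thread, not fixed requirements:

```shell
#!/bin/sh
# Sketch: create a mountpoint, add a tmpfs line to /etc/fstab *outside*
# the OMV marker block ("# >>> [openmediavault]" ... "# <<< [openmediavault]")
# so OMV's config generator leaves it alone, then mount it.
# All values are examples -- adjust size and path to taste.

FSTAB_LINE='tmpfs /mnt/RamDrive tmpfs defaults,noatime,nosuid,size=1024m 0 0'

setup_ramdrive() {
    mkdir -p /mnt/RamDrive
    # Append only once, and only outside the OMV tags
    grep -q '/mnt/RamDrive' /etc/fstab || echo "$FSTAB_LINE" >> /etc/fstab
    mount /mnt/RamDrive
}
```

      Call setup_ramdrive once as root; the Samba share definition then goes into the SMB/CIFS plugin's extra options field as described above.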
      omv 4.1.22 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.15
      omv-extras.org plugins source code and issue tracker - github

    • Thanks for the tips.
      I added a tmpfs entry (lines 14-15) to fstab, and the file now looks as below:

      Source Code

      1. root@omv-nas:~# cat /etc/fstab
      2. # /etc/fstab: static file system information.
      3. #
      4. # Use 'blkid' to print the universally unique identifier for a
      5. # device; this may be used with UUID= as a more robust way to name devices
      6. # that works even if disks are added and removed. See fstab(5).
      7. #
      8. # <file system> <mount point> <type> <options> <dump> <pass>
      9. # / was on /dev/sda1 during installation
      10. UUID=ee550130-a49a-46b1-ae0d-48ed181741ad / ext4 noatime,nodiratime,errors=remount-ro 0 1
      11. # swap was on /dev/sda5 during installation
      12. # UUID=a1c09b15-5042-42a9-8ca4-d6315a91a6ce none swap sw 0 0
      13. tmpfs /tmp tmpfs defaults 0 0
      14. # RamDrive
      15. tmpfs /mnt/RamDrive tmpfs defaults,noatime,nosuid,size=1024m 0 0
      16. # >>> [openmediavault]
      17. /dev/disk/by-label/OMV-Data /srv/dev-disk-by-label-OMV-Data ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,discard,acl 0 2
      18. /dev/disk/by-label/WDRed /srv/dev-disk-by-label-WDRed ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
      19. # <<< [openmediavault]

      And when I run df -h in the console, I can see the RamDrive (line 9) mounted correctly:

      Source Code

      1. root@omv-nas:~# df -h
      2. Filesystem Size Used Avail Use% Mounted on
      3. udev 958M 0 958M 0% /dev
      4. tmpfs 195M 5.8M 189M 3% /run
      5. /dev/sdb1 15G 3.0G 11G 22% /
      6. tmpfs 972M 8.0K 972M 1% /dev/shm
      7. tmpfs 5.0M 0 5.0M 0% /run/lock
      8. tmpfs 972M 0 972M 0% /sys/fs/cgroup
      9. tmpfs 1.0G 4.0K 1.0G 1% /mnt/RamDrive
      10. tmpfs 972M 0 972M 0% /tmp
      11. /dev/sdb3 99G 116M 94G 1% /srv/dev-disk-by-label-OMV-Data
      12. /dev/sda1 2.7T 110G 2.6T 4% /srv/dev-disk-by-label-WDRed
      13. folder2ram 972M 55M 918M 6% /var/log
      14. folder2ram 972M 0 972M 0% /var/tmp
      15. folder2ram 972M 1.6M 970M 1% /var/lib/openmediavault/rrd
      16. folder2ram 972M 1.7M 970M 1% /var/spool
      17. folder2ram 972M 14M 958M 2% /var/lib/rrdcached
      18. folder2ram 972M 12K 972M 1% /var/lib/monit
      19. folder2ram 972M 4.0K 972M 1% /var/lib/php
      20. folder2ram 972M 0 972M 0% /var/lib/netatalk/CNID
      21. folder2ram 972M 456K 971M 1% /var/cache/samba
      However, under web-admin -> File Systems I cannot find an option to mount the RamDrive; it is not listed there.

      [Screenshot: https://imgur.com/a/I0Se9lM]

      [Screenshot: https://imgur.com/a/VKkN5MG]

      Am I doing anything wrong at this point?


    • tkaiser wrote:

      waqaslam wrote:

      Am I doing anything wrong at this point?
      Possibly related, but what do you want to achieve by sharing a RAM disk? OMV is about NAS, so we're talking about network access. Isn't the network already the bottleneck even for spinning rust, or is your OMV box equipped with 40GbE or better?
      I wanted to have a storage location that is not bound to any physical disk, so that I can handle some trivial data without waking the drives.
      One use case is surveillance camera video files that are continuously written to the RAM disk and, in parallel, synced to cloud storage (and removed afterwards).
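      That write-to-RAM-then-offload flow could be scripted roughly like this. Note that upload() is a placeholder (in practice it might be an rclone call or similar), and the paths are made-up examples, not anything from this thread:

```shell
#!/bin/sh
# Move finished recordings off the RAM disk so it never fills up,
# deleting each file locally once the upload step succeeds.

SRC=${SRC:-/mnt/RamDrive/camera}

upload() {
    # Placeholder for the real cloud sync, e.g. an rclone call
    echo "uploading $1"
}

offload() {
    # Only touch files idle for at least a minute, so recordings
    # that are still being written stay put.
    find "$SRC" -type f -mmin +1 | while read -r f; do
        upload "$f" && rm -f "$f"
    done
}
```

      Run offload from cron (e.g. every minute); that way the RAM disk only ever holds the last few minutes of footage.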
      Thanks @ryecoaaron, now I understand what you meant by using the extra options of SMB/CIFS.
      I've added the following in SMB/CIFS -> Settings (Advanced settings) and it worked as expected.

      Source Code

      1. [RamDrive]
      2. comment = All data will be erased when the host is restarted.
      3. path = /mnt/RamDrive
      4. browseable = yes
      5. create mask = 0775
      6. directory mask = 0775
      7. guest ok = yes
      8. read only = no
      Now I can see RamDrive as a Samba share in Windows and it's working great. :)
      Interesting use case... As @ryecoaaron explained, you won't see this mountpoint anywhere in the OMV UI but need to create the Samba share definition manually and then add it to the Samba module's global options. Kind of a hack, but it ensures that your added share definition is carried over by OMV into smb.conf without conflicting with other OMV settings.

      OK, you figured it out in the meantime yourself :)
    • Not for @waqaslam's use case with already highly compressed data, but for other users stumbling across this thread: using a compressed zram device instead of tmpfs can be a great idea for storing data that compresses well (e.g. text or log files -- they easily shrink to 1/10). But then you need some scripting to set up the zram device at boot. As an idea, see this function we use on the ARM OMV images to optionally use a compressed /tmp: github.com/armbian/build/blob/…bian-zram-config#L96-L114
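      For anyone who wants to experiment, a minimal boot-time sketch (loosely modeled on the armbian-zram-config idea linked above) might look like this. The device index, lz4 algorithm, size, and mountpoint are example choices, not values taken from that script:

```shell
#!/bin/sh
# Create a 1 GiB compressed zram block device, put a filesystem on it,
# and mount it where tmpfs would otherwise go. Needs root and the zram
# kernel module; all values here are examples.

SIZE_MB=1024
MNT=/mnt/ZramDrive

setup_zram() {
    modprobe zram num_devices=1
    echo lz4 > /sys/block/zram0/comp_algorithm
    echo $((SIZE_MB * 1024 * 1024)) > /sys/block/zram0/disksize
    mkfs.ext4 -q -O ^has_journal /dev/zram0   # a journal is pointless in RAM
    mkdir -p "$MNT"
    mount -o noatime,nosuid /dev/zram0 "$MNT"
}
```

      Since there is no fstab one-liner for zram, hook setup_zram into a boot script (e.g. a systemd unit or /etc/rc.local) so the device is recreated on every boot.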