Several HDDs under one shared folder.

  • Hello,

    I'd like to have this HDD organization under the very same share:


    /Sharefolder Root/
        /Vol1-directory -> entire filesystem of physical HDD #01
        /Vol2-directory -> entire filesystem of physical HDD #02
        /Vol3-directory -> entire filesystem of physical HDD #03
        etc.

    Do I have to use fstab to mount each HDD's filesystem under the shared folder's VolX directory, or is there an OMV way to do it, please?


    Thanks.

  • Use mergerfs.


    You can configure it manually in fstab or use OMV's openmediavault-unionfilesystems plugin.
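
    If you go the manual fstab route, a single mergerfs line is roughly all it takes. A sketch, assuming the disks are already mounted at /mnt/disk1..3 (substitute your real OMV mount points):

        # Pool three disks into one mergerfs mount at /srv/pool (example paths)
        /mnt/disk1:/mnt/disk2:/mnt/disk3  /srv/pool  fuse.mergerfs  allow_other,category.create=mfs,minfreespace=10G  0  0

    The plugin should end up generating an equivalent mount for you, so editing fstab by hand is optional.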

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

    • Official post

    Mergerfs can handle that. You just need to specify "existing path" policies and make sure each path exists only on the right HDD.


    If mergerfs uses an EP policy, files end up on the HDDs that already contain the existing path. If only one HDD has that path, then all files end up on that HDD.
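
    A rough fstab sketch of that idea, using example disk paths and the directory names from the first post:

        # Create each VolX directory on exactly one disk...
        mkdir /mnt/disk1/Vol1-directory
        mkdir /mnt/disk2/Vol2-directory
        mkdir /mnt/disk3/Vol3-directory

        # ...then pool them with an "existing path" create policy (epmfs):
        /mnt/disk1:/mnt/disk2:/mnt/disk3  /srv/pool  fuse.mergerfs  allow_other,category.create=epmfs  0  0

        # Writes into /srv/pool/Vol2-directory/ can then only land on disk2,
        # since that is the only branch where the path already exists.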

  • Just curious. Why do you want to do it like this?


    One filesystem split up between HDDs. It seems you could use one shared folder per HDD instead?

    Because I have a huge number of disks (I built a DIY 28-HDD enclosure) storing big files (50~100 GB each), and I'm used to having one share for all these files (I'm currently on ClearOS, a CentOS derivative).


    But I'm open to suggestions :thumbup:

    • Official post

    Using the HDDs as separate filesystems might be wasteful and require manual shuffling of files as drives fill.


    An option could be two mergerfs drive pools without an EP policy. Mergerfs will handle the shuffling of data and consolidate free storage, but you lose control over which drive the files end up on.

    Perhaps just Most Free Space (MFS). Then the files will be evenly spread out over all drives, which gives more even drive usage and perhaps even performance benefits, with one mergerfs drive pool serving as backup for the other. The drawback is that drive spindown is less likely, since there will be less locality of files: associated files in the same folder are likely to end up on different drives.


    Another option is Least Free Space. Then the drives are filled one after another. Great for consolidating storage, and it's easy to add a new drive when needed. More locality for associated files.


    A combination might be best. MFS for shared live data and LFS for backups.
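
    As a sketch, with made-up mount points, the two pools would differ only in the create policy:

        # Live pool: spread files evenly (most free space)
        /mnt/live1:/mnt/live2:/mnt/live3  /srv/live    fuse.mergerfs  allow_other,category.create=mfs  0  0
        # Backup pool: fill drives one after another (least free space)
        /mnt/bak1:/mnt/bak2:/mnt/bak3     /srv/backup  fuse.mergerfs  allow_other,category.create=lfs  0  0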


    One reason why you might want control over which drive files end up on could be that you use some form of tiered storage: cold "archive" data that never or rarely changes and thus does not need updated backups, and hot data that is often added to or updated and is backed up often. You only need to update the cold backups when you move data from the hot to the cold tier. I use a tiered system for parts of my storage.


    You may want to investigate snapraid as well.
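
    A minimal snapraid.conf sketch, assuming one spare disk is set aside for parity (all paths are examples):

        parity  /mnt/parity1/snapraid.parity
        content /var/snapraid.content
        content /mnt/disk1/snapraid.content
        data d1 /mnt/disk1/
        data d2 /mnt/disk2/
        data d3 /mnt/disk3/

    Run "snapraid sync" regularly; after a disk failure, "snapraid fix" can rebuild the lost files from parity.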

    • Official post

    Nice build.


    I have ordered parts for a DIY 19" 18U rack. I intend to move most of my boxes there. Odroid HC2s, RPi4, switch, PC and a server built from salvaged old PC parts.


    Inspired by: https://tombuildsstuff.blogspo…erver-rack-plans.html?m=1


    I bought two 4U server enclosures to put my existing desktop in, as well as an OMV server built from various parts. Plenty of disk space.


    https://unykach.com/en/servers/rack-case-4u19-uk-4129-51912/

  • I like the 19" wood furniture, but I think it'll cost more than a ready-made metal one...

    I checked: a metal one is about 100~150€, but far less aesthetic and with no accessories!


    I DIYed mine with spare parts to cut costs... As I used silencing ribbon between the panels and a double decoupling system for the HDDs, it's really silent.


    I'm awaiting my Xeon E5-2378 V3 + 32GB + new motherboard (with NVMe), hoping it'll get through customs, if possible without a customs invoice, in these Covid days. 290€ including shipping for such a 12-core/24-thread config is a real deal, as I'm virtualizing my whole system in Proxmox.


    (photos of the build attached)





    About LFS, it's nice, but what happens when an HDD fails, please?


    Do you lose everything in the share (a RAID0-like failure)?

    Or do you only lose the content of the defective HDD, plus a couple of files spread across disks N-1 / N / N+1, while the rest stays available?


    I hope it's the second case.

    --> The difficulty, when a failure occurs (and it will), is finding out which files are missing... as I don't back up such a large amount of files.

    • Official post

    Yes, with mergerfs you only lose the data on the defective drive. And only if it wasn't backed up.


    I agree that mergerfs may complicate restores. But using rsync scripts it is possible, and even easy, to selectively restore missing files here and there from a backup snapshot.


    You remove the defective drive from the pool.


    Then, provided there is room in the pool, you essentially "invert" existing rsync snapshot backup scripts. So instead of updating the snapshot backup from the pool you update the pool from the latest snapshot.
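
    As a rough sketch with made-up paths, the inverted run is just the usual rsync with source and destination swapped:

        # Normal direction: update the latest snapshot from the pool
        #   rsync -a --delete /srv/pool/ /backup/latest/
        # After a drive failure: refill the pool from the latest snapshot.
        # --ignore-existing only copies back what disappeared with the dead
        # drive and leaves all surviving files untouched.
        rsync -a --ignore-existing /backup/latest/ /srv/pool/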


    (I believe snapraid provides a similar functionality?)


    And if the defective drive could be fixed, you can retire it to a cold backup pool. And later add a new fresh drive to the pool to increase the free storage.


    I backup everything at least once. More typically two or three times. Sometimes more.


    By distinguishing between cold and hot data I can greatly reduce what I need to back up often. In addition I use rsync snapshots for my "hot" data, meaning that only new or changed files need to be copied into a new backup snapshot. I typically keep 7 daily, 4 weekly and 3-12 monthly snapshots, and the rsync snapshots run automatically every night, at least.
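
    Stripped of retention handling, a nightly snapshot run boils down to something like this (paths are simplified examples, not my actual setup):

        #!/bin/sh
        TODAY=$(date +%Y-%m-%d)
        # Hard-link unchanged files against the previous snapshot, so only
        # new or changed files take up additional space.
        rsync -a --delete --link-dest=/backup/latest \
            /srv/pool/hot/ /backup/$TODAY/
        # Point "latest" at the snapshot that was just created.
        ln -sfn /backup/$TODAY /backup/latest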


    I will mostly reuse my existing old computer hardware, at least initially. My goal is to build a really quiet server rack with a lot of storage and a lot of internal bandwidth for fast backups and restores, with big fans spinning slowly and quietly. I don't really need a lot of performance. I am mostly happy with the performance of my HC2s and my RPi4, but I want an AMD64 server in the network as well. Not everything will run on ARM.

  • Thanks for the LFS information; it's an approach I'm going to investigate, thanks to you.


    Backup of my big files is done using a ring of friends sharing the very same resources.

    For hot data, I mainly use online cloud resources. I'm not a fan of their data access policies (N$A...), but it's the least bad choice.



    I have the same mindset about silencing servers.

    I used to use a pair of cases with 1-meter SATA cables, but those cases are noisy, essentially due to the metal they are made of. Vibrations and echoes.


    Using wood (I know MDF isn't wood) helps a lot if you use speaker-damping material to isolate the panels from each other.

    Choosing a two-compartment case is also a plus, as you can manage cooling much more efficiently.



    My HDD enclosure is cooled by three super-silent 120mm fans tied to a controller with three temperature sensors.

    My CPU enclosure uses a 220mm fan and two super-silent 120mm fans, together with an almost silent El Macho CPU cooler. Those are oversized for the target CPU, which allows quiet, low-speed operation.


    I also dug up an old but efficient modular 1000W PSU that runs silently as long as not too much current is drawn.


    For the HDDs, which are the biggest source of vibration, I use:


    (photo: HDD mounting screws with vibration decoupling)


    It helps a lot !!


    My 3D-printed DIY HDD supports are equipped with two O-rings on each attaching wood screw, to prevent any residual vibration from being transmitted to the panel.


    RPi4s are great, but what a shame they don't include native USB boot as the RPi3+ does.


    What is an HC2, please?

    • Official post

    The Odroid HC2 is an 8-core ARM32 SBC with a SATA III port and GbE. Low power and no fan, but a big aluminium cooler that is also used to mount the HDD and allows stacking of several HC2s.


    You can check out my old thread here: My new NAS: Odroid HC2 and Seagate Ironwolf 12TB.


    The HC2 is a bit dated now, given newer and more powerful ARM64 SBCs.


    It is hard to compare directly, but I would say it equals or outperforms the RPi4 in raw CPU power. The HC2 has GbE and a SATA connector for a single 3.5" HDD, using a fast USB-SATA interface. But it is headless and only has a USB 2 port, so it is very specialized for use as a single-HDD NAS. Indeed, HC stands for Home Cloud...


    This makes the HC2 a very efficient and specialized way to make the storage of an HDD available on a network. It can easily saturate a GbE connection. It can also run Docker containers like Plex or Emby fine, but not transcode 4K in real time.


    I have several, each with a 12-16TB HDD. And each is connected to all the others using a GbE switch, autofs and NFS. So all the HC2s can simultaneously access all files on all HDDs for media streaming, backups or restores. As can clients like PCs, laptops, phones and tablets.
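
    The autofs part is just one map entry per HC2, so the NFS mounts come and go on demand. A sketch with made-up hostnames and export paths:

        # /etc/auto.master
        /mnt/hc2  /etc/auto.hc2  --timeout=60

        # /etc/auto.hc2
        hc2-1  -fstype=nfs  hc2-1.lan:/export/media
        hc2-2  -fstype=nfs  hc2-2.lan:/export/media
        hc2-3  -fstype=nfs  hc2-3.lan:/export/media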

  • Real SATA and real Gigabit Ethernet make a HUGE difference in these applications! (not roughly 3/4 of Gigabit Ethernet like on the RPi4)

    This HC2 seems very nice; too bad the RPi doesn't take its ideas for its architecture.


    I love the pipe installation !!


    The HC2 is pricey in Europe... 70€... twice an RPi4. And why such a heatsink, is it intended to cover a 3.5" HDD? <- If so, good idea for building an all-in-one solution!

    (image: Odroid HC2)



    I plan to build a case for one of my Pi4s to add a 2.5" drive with a USB/SATA cable.

    • Official post

    why such a heatsink, is it intended to cover a 3.5 HDD ?

    Yep. If you only want to use a 2.5" hard drive, get an HC1.

    • Official post

    Those screws with vibration decoupling look very interesting.

    I've had a couple of Lian Li cases come with something similar, but I'm not sure where you could get them otherwise. Seems like they wouldn't help the odroid-hc* since the drive sits on the heatsink.

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


