Multiple USB disks to RPI - A power problem?


    • Multiple USB disks to RPI - A power problem?

      Hi Guys,
      I'm an amateur geek, so please help me figure something out.
      I have OMV running on an RPi 3 and I had one external HDD plugged into it (the HDD has its own power source).

      Recently I wanted to set up a RAID system, so I hooked up 2 other HDDs hoping to go for RAID5. But after reading the forum I realized it's a bad idea to build a RAID using an RPi and USB-attached HDDs. So I gave up on the RAID idea and am just using all 3 HDDs separately.

      However, I have noticed that since I added the 2 additional drives, the write/read speed of my initial disk has dropped tremendously. I don't know how to measure that accurately; I'm just estimating it by comparing copy times before and after connecting the other disks.
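      For reference, a rough way to measure sequential throughput from an SSH shell looks like this; the /tmp path is only a placeholder and should point at the mount of the disk under test:

      ```shell
      # Rough sequential write/read test. Replace /tmp with a directory on
      # the disk you want to measure (e.g. a mount point under /srv/).
      # conv=fdatasync forces the data to disk before dd reports a speed,
      # otherwise you only measure RAM caching.
      TESTFILE=/tmp/omv_speedtest.bin
      dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync
      dd if="$TESTFILE" of=/dev/null bs=1M
      rm -f "$TESTFILE"
      ```

      Run it before and after attaching the other disks and compare the MB/s figures dd prints.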

      All 3 disks have their own power supply, so I'm not sure what is causing this slowdown. Could it be that even though the disks have their own power source, each USB cable still draws a bit of current from the RPi, which is unable to supply it? Would connecting all 3 disks to a common powered USB hub and then connecting that hub to the RPi fix the problem? That way the RPi only has to power 1 port and the powered hub takes care of the rest.

      I would appreciate any opinion or advice. Thank you.
    • I don't think it's a power issue. First of all, RAID on an RPi is a very very bad idea, mainly because multiple drives on an RPi is a very bad idea.

      The problem with the RPi hardware is that a single slow 100Mb/s bus is shared by all four USB ports AND the Ethernet port. It doesn't have 4 independent USB ports, but a hub plugged into a single bus that is also shared by the network traffic. Plugging 3 drives into an RPi while using the network is a recipe for slowness.

      This is why the RPi is an extremely poor choice for a NAS.
    • Nibb31 wrote:

      I don't think it's a power issue. First of all, RAID on an RPi is a very very bad idea, mainly because multiple drives on an RPi is a very bad idea.

      The problem with the RPi hardware is that a single slow 100Mb/s bus is shared by all four USB ports AND the Ethernet port. It doesn't have 4 independent USB ports, but a hub plugged into a single bus that is also shared by the network traffic. Plugging 3 drives into an RPi while using the network is a recipe for slowness.

      This is why the RPi is an extremely poor choice for a NAS.
      I see. Thanks for the reply.
      Well, I did give up the RAID project; I'm only using all 3 disks separately now.

      So, as weird as it sounds, it would be wiser to make the RPi 3 connect wirelessly to the router rather than being plugged in? This would dedicate its bus to USB transfers rather than both transfers from the router and to the USB HDDs.

      I do realize this is not the best setup. I'm currently in an apartment and just want a small setup. Maybe one day when I get my own house I'll build a decent NAS. But for now all I need is some over-wifi Time Machine backup and the Plex media server plugin. They do run well, I must say. The only problem is loading big files onto the HDD, like the movie library... It literally takes forever.

      So, to be clear, adding a powered USB hub would have no benefit, given that my bottleneck is the processing of the data rather than the "flow" of it.

      And would you recommend connecting the RPi running OMV to the router wirelessly rather than over Ethernet? Sounds illogical, but it seems to be the way to go if I understand the problem correctly.
    • Nibb31 wrote:

      I'm pretty sure the Wifi and Bluetooth are on the same bus as the rest.
      Nope, they're not. The onboard Wi-Fi of any Raspberry Pi is on its own bus (ultra slow SDIO) and is even crappier than the Ethernet, which has to share USB bandwidth.

      Just as using any RPi as a NAS is a terrible idea (those things are so horribly crippled wrt network and IO 'speed'), the idea of preferring onboard Wi-Fi over Ethernet is even more terrible, since then your performance suffers even more.

      With the crappy onboard Ethernet and our OMV image you get at least around ~10 MB/s (the 'shared bus' is not the bottleneck here but the ultra slow Ethernet variant the RPi folks chose for their boards). With onboard Wi-Fi it's impossible to exceed 4 MB/s, and latency is also a lot worse.

      More details: Raspberry PI / OMV... ethernet or wifi?


      Nibb31 wrote:

      The problem with the RPi hardware is that a single slow 100Mb/s bus is shared by all four USB ports AND the Ethernet port
      Not exactly. The SoC on all those Raspberries has been basically the same since 2012. It's a VideoCore IV made for TV boxes and 'smart' gadgets and therefore lacks any decent IO capabilities. The only 'high speed' interface to the outside is a single USB2 connection (theoretically 480 Mbps; due to several design flaws a single USB2 device cannot in reality exceed 37.5 MB/s there) connected to an internal USB hub with an embedded Fast Ethernet interface.
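      The 37.5 MB/s figure is simply the ~300 Mbit/s of usable bulk throughput that USB2 reaches after protocol overhead, divided by 8 bits per byte:

      ```shell
      # 480 Mbit/s signalling minus protocol overhead leaves roughly
      # 300 Mbit/s of usable bulk throughput for a single device
      awk 'BEGIN{printf "%.1f MB/s\n", 300/8}'   # → 37.5 MB/s
      ```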

      The onboard Wi-Fi hangs off its own 'slow speed' interface called SDIO, which does not have to share bandwidth with the USB2 port, but the laughable onboard Wi-Fi implementation on Raspberries is too slow for any decent NAS operation anyway. The maximum you could get is around 40% of what's possible with the slow 'Fast Ethernet' those Raspberries are equipped with.

      hekmatk wrote:

      However, I have noticed that since I added the 2 additional drives, the write/read speed of my initial disk has dropped tremendously.
      As expected, since all disks sit on the same bus and the RPi folks never explored or added support for more efficient ways to access disks on Raspberries anyway. If your device is limited to USB2 you definitely want UAS (USB Attached SCSI), especially if you have to access more than one disk in parallel. The old 'mass storage' / BOT mode is all the RPi can do, and that's a joke with modern USB disks.
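      To check which mode a disk bridge actually negotiated (assuming lsusb from the usbutils package is available):

      ```shell
      # Driver=uas means USB Attached SCSI; Driver=usb-storage means the
      # old BOT mass-storage mode. The trailing echo is just a guard for
      # systems without usbutils or without any USB disks attached.
      lsusb -t | grep -E 'Driver=(uas|usb-storage)' || echo "no USB storage found"
      ```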

      The best way to deal with your problem is to throw away your RPi (or use it for something that does NOT depend on network or IO) and replace it with one of the many cheap and more energy efficient ARM boards out there (those ultra cheap Allwinner thingies all have 4 real USB2 ports, and many of them real Gigabit Ethernet on another bus): Which energy efficient ARM platform to choose?


    • tkaiser wrote:

      Nibb31 wrote:

      I'm pretty sure the Wifi and Bluetooth are on the same bus as the rest.
      Nope, they're not. The onboard Wi-Fi of any Raspberry Pi is on its own bus (ultra slow SDIO) and is even crappier than the Ethernet, which has to share USB bandwidth.
      Just as using any RPi as a NAS is a terrible idea (those things are so horribly crippled wrt network and IO 'speed'), the idea of preferring onboard Wi-Fi over Ethernet is even more terrible, since then your performance suffers even more.

      With the crappy onboard Ethernet and our OMV image you get at least around ~10 MB/s (the 'shared bus' is not the bottleneck here but the ultra slow Ethernet variant the RPi folks chose for their boards). With onboard Wi-Fi it's impossible to exceed 4 MB/s, and latency is also a lot worse.

      More details: Raspberry PI / OMV... ethernet or wifi?


      Nibb31 wrote:

      The problem with the RPi hardware is that a single slow 100Mb/s bus is shared by all four USB ports AND the Ethernet port
      Not exactly. The SoC on all those Raspberries has been basically the same since 2012. It's a VideoCore IV made for TV boxes and 'smart' gadgets and therefore lacks any decent IO capabilities. The only 'high speed' interface to the outside is a single USB2 connection (theoretically 480 Mbps; due to several design flaws a single USB2 device cannot in reality exceed 37.5 MB/s there) connected to an internal USB hub with an embedded Fast Ethernet interface.
      The onboard Wi-Fi hangs off its own 'slow speed' interface called SDIO, which does not have to share bandwidth with the USB2 port, but the laughable onboard Wi-Fi implementation on Raspberries is too slow for any decent NAS operation anyway. The maximum you could get is around 40% of what's possible with the slow 'Fast Ethernet' those Raspberries are equipped with.

      hekmatk wrote:

      However, I have noticed that since I added the 2 additional drives, the write/read speed of my initial disk has dropped tremendously.
      As expected, since all disks sit on the same bus and the RPi folks never explored or added support for more efficient ways to access disks on Raspberries anyway. If your device is limited to USB2 you definitely want UAS (USB Attached SCSI), especially if you have to access more than one disk in parallel. The old 'mass storage' / BOT mode is all the RPi can do, and that's a joke with modern USB disks.
      The best way to deal with your problem is to throw away your RPi (or use it for something that does NOT depend on network or IO) and replace it with one of the many cheap and more energy efficient ARM boards out there (those ultra cheap Allwinner thingies all have 4 real USB2 ports, and many of them real Gigabit Ethernet on another bus): Which energy efficient ARM platform to choose?
      Wow, thank you for the very detailed reply! So yeah, I guess I'll just stick to the slow Ethernet. I do realize the RPi is NOT the best option for a NAS, but I just wanted a low-price entry gadget that I could use for other projects later on. Eventually, when I get my own house, I'll invest in a serious server with the knowledge I'll have gained by using the RPi.

      As for now, all I'm using it for is over-wifi Time Machine backup for my Mac along with Plex media server, and honestly it's doing a decent job. The only problem is when I want to transfer movies to my database hard disks. But if I do it one movie at a time, it's OK. I'm suffering now because I'm throwing the bulk of my library onto the disk, but this should be a one-time thing.

      Thanks again!
    • hekmatk wrote:

      As for now, all I'm using it for is over-wifi Time Machine backup for my Mac along with Plex media server, and honestly it's doing a decent job

      I strongly disagree, since backups are always done for a reason: being able to recover from data losses or even the loss of a whole machine. The time to back something up is more or less irrelevant. What matters is the time needed for restore and disaster recovery.

      Let's assume you back up a Mac with 250 GB of data on it. As explained in the other thread, with an RPi you're pretty limited by the onboard interfaces. But when talking about Time Machine, which uses so-called 'sparse bundles' on network shares, the numbers provided over there have to be taken with care. It's not 'less than 4 MB/s' and 'around 10 MB/s' with onboard Wi-Fi or Ethernet but way lower, since TM has some abstraction layers and partially depends on low network latency. So in reality, with an RPi connected via its own onboard Wi-Fi, especially in a crowded area, you'll see backup throughput of 1 MB/s or (way) below. If the 2.4 GHz band is saturated, latency sucks horribly and so does backup and restore performance.

      With the RPi's own Fast Ethernet (even if your Macs use Wi-Fi with a decent AP) we're talking about 5-6 MB/s maximum in reality. Concerned about backup speeds? You should be if you do not already have a working backup ('working' is defined as properly tested!).

      But backup speed is not that important compared to restore or desaster recovery times.

      Imagine you have to set up your machine again or need to continue on a new Mac (since the old one is damaged, stolen, whatever). Doing a full restore at 5 MB/s means waiting ~15 hours (onboard Ethernet), but you should be prepared that restoring small files especially might take a lot longer (macOS consists of hundreds of thousands of small files ;) ). I wouldn't be surprised if it takes almost a full day to restore 250 GB when TM has to run on a slow RPi.
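      The arithmetic behind that estimate (250 GB at a sustained 5 MB/s, before any small-file overhead) is simple:

      ```shell
      # 250 GB = 250,000 MB; at 5 MB/s that is 50,000 seconds
      awk 'BEGIN{printf "%.1f hours\n", 250000/5/3600}'   # → 13.9 hours
      ```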

      When you use an external Gigabit Ethernet adapter you won't benefit that much, since you might only see a threefold increase (as explained: bandwidth is then bottlenecked by the 'shared bus', so only twice as fast compared to Fast Ethernet, but latency gets a little better compared to the onboard 100 Mbit/s implementation). You might still need to wait 8 or more hours for a full restore.

      Now use a cheap ARM board with native Gigabit Ethernet and fast storage interfaces not bottlenecked by network and you're done in less than 2 hours. Again: Which energy efficient ARM platform to choose?

      And now imagine you believed the onboard Wi-Fi of your RPi were faster than onboard Ethernet (which is never true. NEVER!) and prevented both the client and your TimeMachine RPi from being connected with an Ethernet cable. With Wi-Fi, TM restore throughput can pretty easily drop below the 1 MB/s barrier and might almost stall in crowded environments (your neighbours define the maximum performance of your 2.4 GHz wireless LAN, never you). In such a situation a full restore of 250 GB can take a week or even more, if it isn't aborted earlier (restores need to be tested. Always!)


    • tkaiser wrote:

      hekmatk wrote:

      As for now, all I'm using it for is over-wifi Time Machine backup for my Mac along with Plex media server, and honestly it's doing a decent job
      I strongly disagree, since backups are always done for a reason: being able to recover from data losses or even the loss of a whole machine. The time to back something up is more or less irrelevant. What matters is the time needed for restore and disaster recovery.

      Let's assume you back up a Mac with 250 GB of data on it. As explained in the other thread, with an RPi you're pretty limited by the onboard interfaces. But when talking about Time Machine, which uses so-called 'sparse bundles' on network shares, the numbers provided over there have to be taken with care. It's not 'less than 4 MB/s' and 'around 10 MB/s' with onboard Wi-Fi or Ethernet but way lower, since TM has some abstraction layers and partially depends on low network latency. So in reality, with an RPi connected via its own onboard Wi-Fi, especially in a crowded area, you'll see backup throughput of 1 MB/s or (way) below. If the 2.4 GHz band is saturated, latency sucks horribly and so does backup and restore performance.

      With the RPi's own Fast Ethernet (even if your Macs use Wi-Fi with a decent AP) we're talking about 5-6 MB/s maximum in reality. Concerned about backup speeds? You should be if you do not already have a working backup ('working' is defined as properly tested!).

      But backup speed is not that important compared to restore or desaster recovery times.

      Imagine you have to set up your machine again or need to continue on a new Mac (since the old one is damaged, stolen, whatever). Doing a full restore at 5 MB/s means waiting ~15 hours (onboard Ethernet), but you should be prepared that restoring small files especially might take a lot longer (macOS consists of hundreds of thousands of small files ;) ). I wouldn't be surprised if it takes almost a full day to restore 250 GB when TM has to run on a slow RPi.

      When you use an external Gigabit Ethernet adapter you won't benefit that much, since you might only see a threefold increase (as explained: bandwidth is then bottlenecked by the 'shared bus', so only twice as fast compared to Fast Ethernet, but latency gets a little better compared to the onboard 100 Mbit/s implementation). You might still need to wait 8 or more hours for a full restore.

      Now use a cheap ARM board with native Gigabit Ethernet and fast storage interfaces not bottlenecked by network and you're done in less than 2 hours. Again: Which energy efficient ARM platform to choose?

      And now imagine you believed the onboard Wi-Fi of your RPi were faster than onboard Ethernet (which is never true. NEVER!) and prevented both the client and your TimeMachine RPi from being connected with an Ethernet cable. With Wi-Fi, TM restore throughput can pretty easily drop below the 1 MB/s barrier and might almost stall in crowded environments (your neighbours define the maximum performance of your 2.4 GHz wireless LAN, never you). In such a situation a full restore of 250 GB can take a week or even more, if it isn't aborted earlier (restores need to be tested. Always!)
      Dang... Man, you really do hate OMV on the RPi, don't you? ;)
      But I do agree with you. It is impractical. HOWEVER, in my particular case it is feasible because my Mac has 120GB total storage and it's literally all blank except for a Dropbox folder. So my Time Machine is mainly for the apps and their settings (which is still not a small size, I agree).

      The first backup was a pain (took about a day), then it happens in small increments, each of which takes ~10-15 minutes. It IS extremely slow, but I have my Mac turned on so I don't really mind.

      As for the restore plan, yes, it will be a pain to restore if I need it, but I have a backup plan. There is software like Paragon ExtFS which allows you to read ext4 filesystems directly on a Mac (I tried it, it works). They give you a 10-day free trial. So if I'm in emergency mode, I will simply download this software on the new Mac, plug the NAS HDD directly into my Mac rather than the RPi, mount the drive and just restore from it. This is my "emergency" plan.

      Don't get me wrong though. I FULLY agree with you and I appreciate your input. But for now I'm just training on the RPi, and eventually some day I will get a decent system and hopefully be able to mount a proper RAID and all.

      So yeah, for anyone reading this: if you do not have a setup already, the RPi is NOT the best idea. But if you're on a budget and just want to see what this is about, I still believe the RPi is the way to go.

      About the ARM boards: I had actually never heard of them. I'm not too knowledgeable, so I just flashed OMV onto the RPi SD card using Etcher, simple as that. Would it have been an equally easy process on an ARM board?
    • hekmatk wrote:

      But if you're on a budget and just want to see what this is about, I still believe rpi is the way to go.
      Nope. The RPi is never the device to choose for any sort of NAS, since it's way too crippled wrt IO. Literally every other ARM board is better, since they at least provide one real Ethernet port + 1 x USB2, or at least two USB2 ports so you can attach a cheap RTL8153 Gigabit Ethernet dongle and access a USB disk without being bottlenecked by a 'shared bus' as on the RPi.

      hekmatk wrote:

      About the ARM boards: I had actually never heard of them. I'm not too knowledgeable, so I just flashed OMV onto the RPi SD card using Etcher, simple as that. Would it have been an equally easy process on an ARM board?
      Sure, it's exactly the same process, it's exactly the same 'look and feel' but unlike with your RPi you get decent performance.

      For your information: the OMV image you currently use on your RPi was made by me and is based on one for a really good NAS ARM board (Clearfog Base, magnitudes faster than any RPi 3). I spent my time providing this image so users already owning an RPi could get the best OMV experience possible. The other reason to develop this image was to provide tools for RPi users to fight a very common and unique Raspberry Pi problem: powering problems -- undervoltage. Almost every RPi user is affected and almost no one knows, since the RPi Foundation does such a great job of masking problems instead of solving them.
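      Speaking of undervoltage: on an RPi the firmware's own flags can be queried directly. A quick check, guarded for non-RPi systems where vcgencmd does not exist:

      ```shell
      # Non-zero bits in get_throttled indicate current or past
      # undervoltage/throttling; throttled=0x0 means the firmware saw
      # no power problems
      vcgencmd get_throttled 2>/dev/null || echo "vcgencmd not available (not an RPi?)"
      ```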

      Wrt your TimeMachine restore scenario and the use of some Paragon software: you're playing a really dangerous game (I already mentioned 'sparse bundles'), but this is off-topic here, and people who do not test backup/restore probably need some data loss first to learn :)


    • tkaiser wrote:

      hekmatk wrote:

      But if you're on a budget and just want to see what this is about, I still believe rpi is the way to go.
      Nope. The RPi is never the device to choose for any sort of NAS, since it's way too crippled wrt IO. Literally every other ARM board is better, since they at least provide one real Ethernet port + 1 x USB2, or at least two USB2 ports so you can attach a cheap RTL8153 Gigabit Ethernet dongle and access a USB disk without being bottlenecked by a 'shared bus' as on the RPi.

      hekmatk wrote:

      About the ARM boards: I had actually never heard of them. I'm not too knowledgeable, so I just flashed OMV onto the RPi SD card using Etcher, simple as that. Would it have been an equally easy process on an ARM board?
      Sure, it's exactly the same process, it's exactly the same 'look and feel' but unlike with your RPi you get decent performance.
      For your information: the OMV image you currently use on your RPi was made by me and is based on one for a really good NAS ARM board (Clearfog Base, magnitudes faster than any RPi 3). I spent my time providing this image so users already owning an RPi could get the best OMV experience possible. The other reason to develop this image was to provide tools for RPi users to fight a very common and unique Raspberry Pi problem: powering problems -- undervoltage. Almost every RPi user is affected and almost no one knows, since the RPi Foundation does such a great job of masking problems instead of solving them.

      Wrt your TimeMachine restore scenario and the use of some Paragon software: you're playing a really dangerous game (I already mentioned 'sparse bundles'), but this is off-topic here, and people who do not test backup/restore probably need some data loss first to learn :)
      I hope I never suffer data loss to learn my lesson :P

      As for the rpi, it seems like I really made a poor choice...

      I currently have some exams coming up, so I shouldn't put much time into this. But once they're done I may consider switching. Would you happen to know any good step-by-step guide for flashing OMV onto an ARM board? And if I make the switch, will I lose all my current data? I think the "ownership" of the files will be a problem?

      Thank you!
    • hekmatk wrote:

      I hope I never suffer data loss to learn my lesson
      Quick off-topic Time Machine restore performance excursion. When backing up, you transfer data from the Mac to the OMV box (Linux); when restoring through the network, it's the other way around. When we played around with this stuff 10 years ago we realized that OS X, starting with 10.4 back then, had a huge problem with other OSes' TCP/IP stacks, resulting in horribly low server --> Mac performance (the boring details).

      When I encountered this the first time I sniffed the traffic and saw this:

      Source Code

      AFP RTT Statistics:
      Commands      Calls   Min RTT   Max RTT   Avg RTT
      ...
      FPReadExt       110   0.00026   2.16564   0.28983
      ...

      An average of 0.29 sec RTT (round trip time) for the packets transmitted (Netatalk default --> 128K), so we can do the math and are at a laughable 0.5 MB/s at best (in reality restore throughput was at 0.35 MB/s). As soon as we executed the following on the Mac client, restore performance was back to normal (+50 MB/s):

      Source Code

      sysctl -w net.inet.tcp.delayed_ack=2
      This simple 'fix' led to an increase in restore performance from 0.35 MB/s to +50 MB/s. That's more than a hundred times faster, or the simple difference between 'less than an hour' and 'at least three full days' to restore the same 120 GB!
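      Reading the Avg RTT column as seconds, the worst-case figure above follows directly from one 128 KiB transaction per round trip:

      ```shell
      # One 128 KiB AFP transaction per ~0.29 s round trip
      awk 'BEGIN{printf "%.2f MB/s\n", 128/1024/0.29}'   # → 0.43 MB/s
      ```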


      That's why you should test a full restore: even if your actual backup times are already lousy, a restore might take magnitudes longer. Same with the idea of mounting the ext4 filesystem locally on your Mac -- you're accessing the wrong layer (below the sparse bundle) and you will not be able to restore from a TM backup directly at installation time (disaster recovery will take way longer, and after a full restore you'll have to adjust a lot of settings)


    • hekmatk wrote:

      Would you happen to know any good step-by-step guide for flashing OMV onto an ARM board?
      As easy as with the Raspberries since it's the same stuff -- please scroll down to the readme at the bottom: sourceforge.net/projects/openm…s/Other%20armhf%20images/

      For whatever reason you cannot migrate settings from one OMV installation to another (so you need to configure everything as before), but if you create all your user accounts in the same way and, especially, in the same order, you end up with an identical ownership/permission situation. Your current OMV installation on your RPi is not based on Raspbian but on the same Armbian as every other ARM board we support, so everything is identical except bootloader/kernel (and lousy performance of course ;) )
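      The reason creation order matters is that file ownership is stored on disk as numeric UIDs/GIDs, which Debian-based systems hand out sequentially starting at 1000; recreate the users in the same order and the numbers line up again. A quick way to look at the raw numbers rather than the names:

      ```shell
      # Create a file and show its ownership as numeric UID/GID --
      # these numbers are what actually live on the filesystem,
      # not the user name
      touch /tmp/uid_demo
      stat -c '%u %g' /tmp/uid_demo
      rm -f /tmp/uid_demo
      ```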
    • tkaiser wrote:

      hekmatk wrote:

      Would you happen to know any good step-by-step guide for flashing OMV onto an ARM board?
      As easy as with the Raspberries since it's the same stuff -- please scroll down to the readme at the bottom: sourceforge.net/projects/openm…s/Other%20armhf%20images/
      For whatever reason you cannot migrate settings from one OMV installation to another (so you need to configure everything as before), but if you create all your user accounts in the same way and, especially, in the same order, you end up with an identical ownership/permission situation. Your current OMV installation on your RPi is not based on Raspbian but on the same Armbian as every other ARM board we support, so everything is identical except bootloader/kernel (and lousy performance of course ;) )
      I really do think I need to switch now. I'm fully convinced. My only concern is that if I switch, I want to go big this time, not just make a tiny improvement. By that I mean I want to start using RAID and hopefully expand. Given that ARM boards are all USB-based (some apparently do have a few SATA ports), I guess my best bet would be to go ahead and build a real desktop PC just like the good old days...

      This will probably be my hobby project for after the exams, and I will definitely ask for some advice before investing in it. Just so I don't end up with a lousy-performance unit, as you have hinted at previously in a not-too-subtle way :D
    • hekmatk wrote:

      I guess my best bet would be to go ahead and build a real desktop PC just like the good old days...
      If the goal is to waste energy and money for nothing then 'a real desktop PC' is always a great idea ;)

      We still use those huge Xeon boxes with plenty of CPU cores and at least 256 GB ECC DRAM at customers' sites (but of course never with RAID5, since that's the totally wrong concept these days -- we use what's described here), but for 'home usage'? I'm happy with those good ARM boards providing performant IO (native SATA) and networking (up to 2.5GbE), since I don't need high availability at home (RAID) but only data integrity and data protection :)

      I like highest performance and lowest (idle) consumption at the same time.
    • tkaiser wrote:

      hekmatk wrote:

      I guess my best bet would be to go ahead and build a real desktop PC just like the good old days...
      If the goal is to waste energy and money for nothing then 'a real desktop PC' is always a great idea ;)
      We still use those huge Xeon boxes with plenty of CPU cores and at least 256 GB ECC DRAM at customers' sites (but of course never with RAID5, since that's the totally wrong concept these days -- we use what's described here), but for 'home usage'? I'm happy with those good ARM boards providing performant IO (native SATA) and networking (up to 2.5GbE), since I don't need high availability at home (RAID) but only data integrity and data protection :)

      I like highest performance and lowest (idle) consumption at the same time.
      Well, I thought I'd go with a PC because I can keep expanding. For cheap I can get a motherboard that can take 6 SATA drives, as I am mostly concerned with increasing my capacity using rather cheap 3TB disk drives. So far I have 3 hard disks and the wiring looks ugly. It would be nice to have them all in a clean case, especially if I'm planning to expand further.

      Now there's RAID-Z? I'm not even able to fully understand RAID. I really should look into understanding some core concepts before moving forward. But to solve the issue of requiring multiple disks to be connected to the ARM board, do you think this is an easy issue to fix? Because ARM boards are definitely WAY cheaper (and neater) than a full desktop PC...
    • tkaiser wrote:

      hekmatk wrote:

      I guess my best bet would be to go ahead and build a real desktop PC just like the good old days...
      If the goal is to waste energy and money for nothing then 'a real desktop PC' is always a great idea ;)
      We still use those huge Xeon boxes with plenty of CPU cores and at least 256 GB ECC DRAM at customers' sites (but of course never with RAID5, since that's the totally wrong concept these days -- we use what's described here), but for 'home usage'? I'm happy with those good ARM boards providing performant IO (native SATA) and networking (up to 2.5GbE), since I don't need high availability at home (RAID) but only data integrity and data protection :)

      I like highest performance and lowest (idle) consumption at the same time.
      Man, I have a question for you, given that you dislike the RPi (which I agree with) and apparently dislike RAID too.

      Here's my scenario: I run Plex on OMV (this is the main reason I have OMV, in addition to Time Machine). Apparently Plex can only fetch videos which are on ONE HDD. My current HDD is 3TB, so once it is full I will need to get a 6TB or whatever, in order to keep all my movies on 1 HDD so Plex can read them. Do note I have another 3TB which is empty, but I cannot put movies there because Plex can only fetch from a single HDD (please correct me if I'm wrong).

      My idea/argument is that if I have a RAID setup, I could mount these two 3TB disks into an array (I believe it's called a "pool"?) of 6TB, and then Plex can see this array as a single unit and fetch media from it.

      The result would be that I have 6TB of storage accessible by Plex (I know such a setup is not fault tolerant), rather than having 3TB accessible and 3TB just sitting there.

      I hope I made my scenario clear. Please let me know what you think and whether I'm understanding this RAID setup thing correctly.

      Thank you!
      What makes you think that Plex can only have videos on a single HDD? You can have multiple libraries, and each library can source multiple folders anywhere on the system. Even if it were true (it isn't), you could still pool your drives together with mergerfs to make them appear as a single drive.

      You don't need RAID to do pooling.
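      For anyone curious, pooling with mergerfs boils down to a single fstab entry. The disk paths below are hypothetical examples, and the mergerfs package has to be installed first:

      ```
      /srv/dev-disk-by-label-EXT1:/srv/dev-disk-by-label-EXT2  /srv/pool  fuse.mergerfs  defaults,allow_other  0 0
      ```

      After a mount -a, /srv/pool shows the combined contents of both disks, while each disk keeps its own independent filesystem underneath.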
    • Nibb31 wrote:

      What makes you think that Plex can only have videos on a single HDD? You can have multiple libraries, and each library can source multiple folders anywhere on the system. Even if it were true (it isn't), you could still pool your drives together with mergerfs to make them appear as a single drive.

      You don't need RAID to do pooling.
      Thank you for the reply.
      The reason I believe Plex can only use a single HDD is that in its interface I get to pick only a single HDD as my database (please refer to the screenshot). So I cannot have files partially on "EXT1" (in my case) and partially on some other external disk. Please correct me if I'm missing a point here.

      As for mergerfs, thank you for pointing it out! I was unaware such a thing exists! Yes, this will most likely bypass my need for RAID. I will try to read a bit about it, but if you have any nice tutorial please link me.

      However, if it turns out I'm missing a point and Plex can fetch data from multiple disks, then I will no longer need mergerfs (and definitely not RAID).

      Thanks!
      Images
      • Screen Shot 2018-03-14 at 10.15.28 AM.png

    • macom wrote:

      hekmatk wrote:

      I get to pick only a single HDD as my database
      The database is only for metadata, not for the media data itself.
      Goodness! I was blind and now I see! This actually solves a LOT of my problems!

      I just realized that from the Plex server you can add more and more folders from whichever disk you want...

      So, to make sure I understand this properly: if I had an SSD, that is where I should point my database in order to get the best performance, right?

      Thanks!