Add drives to an existing array, then extend the existing partition/file system?

    • OMV 3.x
    • New

      daveinfla wrote:


      From what I've read you can't find anything other than a Server board to support ECC these days and it's a requirement for ZFS.

      Congrats, Dave. You caught the attention of a couple of OMV gurus and have been christened! :)
      (Trust me; you will learn a few things, about a few things, on this forum. :D )

      But, really, if you connect the dots, you'd find from your own testing that ZFS does not require ECC. You've been testing in a VM, right? A VM doesn't simulate ECC RAM if it doesn't exist on the host. So, you've been testing ZFS on non-ECC RAM.
      Regardless, your production box will have ECC, so you're on the right track with the right hardware.
      Good backup takes the "drama" out of computing
      ____________________________________
      OMV 3.0.90 Erasmus
      ThinkServer TS140, 12GB ECC / 32GB USB3.0
      4TB SG+4TB TS ZFS mirror/ 3TB TS

      OMV 3.0.81 Erasmus - Rsync'ed Backup Server
      R-PI 2 $29 / 16GB SD Card $8 / Real Time Clock $1.86
      4TB WD My Passport $119
    • New

      flmaxey wrote:

      you've been testing ZFS on non-ECC ram
      Sorry, but there is nothing to test (besides Row hammer and friends). It's only about expectations.

      One basic misunderstanding about hardware is that 'hardware works as it's supposed to'. It doesn't; hardware fails pretty often. The idea that 0 is 0 and 1 is 1 is nice, but it sometimes takes a lot of effort to make that actually true. The most basic lesson here is that DRAM can fail, and does fail from time to time, even with some added redundancy like simple ECC (there's more advanced stuff like IBM's Chipkill too).

      Then there's this other misunderstanding that ZFS, due to its data integrity checking and self-healing capabilities, would cause great harm when used on systems lacking ECC memory. That's just the 'scrub of death' myth, which unfortunately an awful lot of people believe (me included, for quite some time).

      This needs no testing, just some understanding: ECC DRAM and ZFS are two different things; the former is not a requirement for the latter. If you love your data you choose both, and if you can't or don't want to afford ECC DRAM there are even more reasons to choose ZFS (or btrfs or ReFS, not talking about Apple's APFS yet).
      'OMV problems' with XU4 and Cloudshell 2? Nope, read this first. 'OMV problems' with Cloudshell 1? Nope, just Ohm's law or queue size.
    • New

      flmaxey wrote:

      If you want to do anything with a network share, it must start with a "shared folder". I call it, the "base share".
      I knew this, I think it was the way it was presented in the GUI that confused me. It's the same thing with NTFS permissions and shares.


      flmaxey wrote:

      4. You don't need to automate a scrub.
      Good to know. I'm sure it's mentioned somewhere, if I dug far enough?


      flmaxey wrote:

      I use 32GB USB 3.0 thumb drives

      flmaxey wrote:

      USB drives are easy to clone
      I spotted mention of the thumb drive plug-in elsewhere; can you explain how you clone your thumb drives? Can it be done while the production one is in use, or do you have to do this on, say, a workstation using a clone utility?

      What's involved in recovery if the flash drive fails?

      On these same lines, I've seen mention of using SSD drives as a cache. Is this overkill for, say, a 4-5 drive (RAID-Z1) system running off of a USB flash drive?


      Sc0rp wrote:

      ECC is not only bound to "serverboards" - but to "servergrade chipsets"

      tkaiser wrote:

      if you love your data and hate bit rot you spend the extra money on ECC DRAM
      I misspoke; what I meant to say is exactly what you both mentioned about server chipsets and ECC. Other than maybe an actual workstation motherboard and of course ASRock, which I did entertain until I found the 2 Supermicro boards lying around with 16GB of ECC memory in one of them :). To me using ECC is a no-brainer, however I can see how the added cost of a server-grade board, processor, or the ASRock would put some people off.

      I think the ECC-required rumor may have started with FreeNAS; they simply insist you use it.

      Someone mentioned AMD. I'm personally not a fan of AMD, and I've read elsewhere that their support for ECC isn't fully implemented. I tried to find the original article but found the following instead:

      hardwarecanucks.com/forum/hard…amds-ryzen-deep-dive.html


      I have a couple of new questions, just trying to plan ahead.

      I understand that ZFS wants direct control of the drives so using the on-board SATA controller, in JBOD mode, is preferred to a hardware RAID controller. The Supermicro has 6 connectors which will be plenty for now. However, if I ever need to upgrade in the future, is there a preferred add-on JBOD card anyone can recommend?

      Someone please explain the versions listed on Sourceforge. I totally missed the link "Looking for the latest version?" with 3.0.86 listed, and downloaded 3.0.94 at the top of the list. Is 3.0.94 beta? What about 4.0.14? Neither the download page nor the Sourceforge page is very clear on what's the latest release vs. beta...

      How about UPS support? Is there a plug-in for monitoring and automatic shutdown?

      I have an APC 1500VA on my primary server; however, it's USB based. Is there a Windows utility that would report to the OMV box, allowing it to auto-shutdown? Or should I just use a separate UPS for the OMV server?

      Thanks,

      Dave
    • New

      daveinfla wrote:

      However, if I ever need to upgrade in the future, is there a preferred add-on JBOD card anyone can recommend?
      If it's about 4 more SATA ports combined with spinning rust and almost all topologies, IMO a cheap Marvell 88SE9215 is sufficient (there's a slightly more expensive version using 2 PCIe 2.x lanes that is a better idea when accessing SSDs or using performant storage topologies, though). Personally I also prefer that these things aren't reflashed RAID controllers, since those might adjust drive geometry when operated in the wrong modes (most probably one of the reasons ZFS stores its own metadata at the end of the disks?).

      Wrt the ECC DRAM 'requirement': IMO postulating it makes some sense from a support perspective (you don't want to deal with average Joe every other day playing RAID-Z on crap, failing due to crappy hardware and blaming your software as the reason; it's boring, annoying and just a waste of time). And while I'm a strong supporter of ECC DRAM, I'm not happy with this wrong conclusion that got spread. Especially if your system is not equipped with ECC DRAM, using ZFS, btrfs or ReFS (and hopefully soon APFS) is even more important: you've increased the risk of bit rot, so you deserve to be informed easily and early whether it happened. That's what this whole checksumming stuff is all about.

      OMV 3 has been stable since at least 3.0.8x, and for OMV 4 you just need to wait for an official announcement.
      'OMV problems' with XU4 and Cloudshell 2? Nope, read this first. 'OMV problems' with Cloudshell 1? Nope, just Ohm's law or queue size.
    • New

      daveinfla wrote:

      How about UPS support? Is there a plug-in for monitoring and automatic shutdown?

      I have an APC 1500VA on my primary server; however, it's USB based. Is there a Windows utility that would report to the OMV box, allowing it to auto-shutdown? Or should I just use a separate UPS for the OMV server?
      For OMV there is the NUT plugin available, which handles monitoring and automatic shutdown of the OMV box. To use the UPS for both servers you have to do a few things:
      1. Connect the power cords of both servers to the UPS.
      2. Move the USB connection of the UPS from your Windows server to the OMV box.
      3. Enable and configure remote monitoring in the NUT plugin of OMV.
      4. Use a NUT client (e.g. WinNUT) on the Windows server to watch the UPS state reported by OMV (a rough sketch of the resulting server-side setup is below).
      Maybe it is also possible to do it the other way around, with a NUT client on OMV. I never looked into that.

      In this thread another user had a similar question: UPS Nut remote monitoring, where he wrote up how to configure it.
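      Purely as an illustration of what steps 3 and 4 amount to (the OMV NUT plugin writes these files for you; the UPS name, user and password below are made-up examples for a USB APC unit, not values taken from the plugin):

      # /etc/nut/ups.conf -- the locally attached UPS (usbhid-ups covers most USB APC units)
      [apc]
          driver = usbhid-ups
          port = auto

      # /etc/nut/upsd.conf -- allow clients on the LAN to reach upsd
      LISTEN 0.0.0.0 3493

      # /etc/nut/upsd.users -- account the Windows-side client logs in with
      [monuser]
          password = secret
          upsmon slave

      # WinNUT (or any remote upsmon) then monitors: apc@<IP of the OMV box>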
      OMV 3.0.90 (Gray style)
      ASRock Rack C2550D4I - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1)- Fractal Design Node 304
    • New

      tkaiser wrote:

      flmaxey wrote:

      you've been testing ZFS on non-ECC ram
      Sorry, but there is nothing to test (besides Row hammer and friends). It's only about expectations.
      One basic misunderstanding about hardware is that 'hardware works as it's supposed to'. It doesn't; hardware fails pretty often. The idea that 0 is 0 and 1 is 1 is nice, but it sometimes takes a lot of effort to make that actually true. The most basic lesson here is that DRAM can fail, and does fail from time to time, even with some added redundancy like simple ECC (there's more advanced stuff like IBM's Chipkill too).

      Then there's this other misunderstanding that ZFS, due to its data integrity checking and self-healing capabilities, would cause great harm when used on systems lacking ECC memory. That's just the 'scrub of death' myth, which unfortunately an awful lot of people believe (me included, for quite some time).

      This needs no testing, just some understanding: ECC DRAM and ZFS are two different things; the former is not a requirement for the latter. If you love your data you choose both, and if you can't or don't want to afford ECC DRAM there are even more reasons to choose ZFS (or btrfs or ReFS, not talking about Apple's APFS yet).
      The intent of the post was to point out that ZFS works on non-ECC systems and that (even if unnoticed) Dave verified that for himself.

      On the "scrub of death", I read a piece that proposed the possibility. After that, I read the rest of that particular piece strictly for entertainment. Frankly, even if there was a math model that could demonstrate the possibility, in theory, it still wouldn't make sense in practical terms.
      I based that belief and surety on the following:
      There's no way the designers of ZFS wouldn't have sifted something like that out of the code in beta tests or later in field deployments. (We're talking about Sun Microsystems; servers and Solaris were their bread-and-butter business.) Given its age, and with Solaris migrating to Intel platforms, I'm absolutely certain that ZFS has been running on thousands of non-ECC boxes for years. If it were possible, a phenomenon like the "scrub of death" would have manifested itself a long time ago and been corrected. Accordingly, and without knowing a thing about the underlying code, the "scrub of death" notion couldn't be taken seriously.

      In Dave's case, he's reusing actual purpose-built server hardware (Mobo, Xeon processor, 16GB ECC RAM, etc.), and he had the forethought to provision for spares. So, if he adopts OMV and ZFS, he should have a solid NAS.
      Good backup takes the "drama" out of computing
      ____________________________________
      OMV 3.0.90 Erasmus
      ThinkServer TS140, 12GB ECC / 32GB USB3.0
      4TB SG+4TB TS ZFS mirror/ 3TB TS

      OMV 3.0.81 Erasmus - Rsync'ed Backup Server
      R-PI 2 $29 / 16GB SD Card $8 / Real Time Clock $1.86
      4TB WD My Passport $119
    • New

      daveinfla wrote:

      3.0.86 vs. 3.0.94
      There's no such thing. OMV 3 is stable, so simply use whatever installation method you want, apply the latest upgrades, and you're at 3.0.94 anyway.

      I don't try to do one-to-one consultations here (just trying to clarify some stuff from time to time in the hope that some ideas get spread), so it's totally up to you to decide whether you start with OMV 3 or (soon to be ready) OMV 4.

      IMO it's very important to understand what you're doing when choosing your NAS setup, and that's also my main point of criticism when we're talking about OMV in general: the ease of use lets users do stupid things way too often (talking about this RAID5 madness here).
      'OMV problems' with XU4 and Cloudshell 2? Nope, read this first. 'OMV problems' with Cloudshell 1? Nope, just Ohm's law or queue size.
    • New

      flmaxey wrote:

      The intent of the post was to point out that ZFS works on non-ecc systems and that (even if unnoticed) Dave verified that for himself.
      'Works' means nothing.

      You need data corruption caused by bit flips in memory to get an idea of how ZFS copes with that. Funnily enough, with ARM boards this is pretty easy, since for whatever reason even mainline u-boot maintainers usually don't give a shit about reliability and commit wrong DRAM clockspeeds upstream.

      So if you really want to test for this specific issue, it's easiest with a cheap ARM board like the NanoPi NEO2. Simply overclock the DRAM to 696, 720 or 744 MHz and you get bit flips for free.
      'OMV problems' with XU4 and Cloudshell 2? Nope, read this first. 'OMV problems' with Cloudshell 1? Nope, just Ohm's law or queue size.
    • New

      tkaiser wrote:

      for whatever reasons even mainline u-boot maintainers usually don't give a shit about reliability and commit wrong DRAM clockspeeds upstream.
      Just as a reference (for Google or whoever): this is how Armbian (and so OMV too) tries to prevent this: github.com/armbian/build/blob/…lt-dram-clockspeeds.patch

      The majority of these lowered clockspeeds are confirmed to be stable by at least some users. If I roll out tiny NAS devices to do this and that, I usually apply patches that lower the DRAM speed by another 48 MHz just to get some safety headroom.

      But with this in mind it's really easy to test ZFS data integrity behaviour. Simply use a platform that allows you to turn your memory into something producing bit flips every now and then (as soon as you do, you also understand why a certain number of Windows BSODs happen).
      'OMV problems' with XU4 and Cloudshell 2? Nope, read this first. 'OMV problems' with Cloudshell 1? Nope, just Ohm's law or queue size.
    • New

      So I did a test run on the SuperMicro motherboard and memory using some 1TB drives I have lying around.

      I noticed some errors when installing the ZFS plugin on the VM but didn't note them at the time. I also received the following errors when installing it on the physical board. Anything to worry about?

      Building initial module for 4.9.0-0.bpo.4-amd64
      configure: error:
      *** Please make sure the kmod spl devel <kernel> package for your
      *** distribution is installed then try again. If that fails you
      *** can specify the location of the spl objects with the
      *** '--with-spl-obj=PATH' option.
      Error! Bad return status for module build on kernel: 4.9.0-0.bpo.4-amd64 (x86_64)

      Then several lines down..

      cp: cannot stat '/var/lib/dkms/spl/0.6.5.9/build/spl_config.h': No such file or directory
      cp: cannot stat '/var/lib/dkms/spl/0.6.5.9/build/module/Module.symvers': No such file or directory

      It appears to work; I have 5 drives set up as a RAID-Z1 vdev on pool1.

      I also noticed that even though I have ZFS set up using all the drives, they still show up as available when you click on "Create File System". Why?

      Would there be any reason to turn S.M.A.R.T. on or would that interfere with ZFS?

      Dave
    • New

      Of the times I've installed ZFS via the plugin, there was only one iteration where there were no error or exception messages at all. (I think I've done it 5 or 6 times.) Of all those installs, there was only one instance where it didn't work, and in that case the "ZFS" menu didn't show up under Storage. After it failed, a second attempt a few hours later installed and worked correctly.

      So, if your array is working, I want to say it's OK. (But others with more ZFS experience may/should weigh in on this.)
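      If you want a bit more reassurance than "it appears to work", a quick CLI sanity check is possible (just a sketch; "pool1" is the pool name from your post):

      # Did DKMS actually build the spl/zfs modules for the running kernel?
      dkms status

      # Is the zfs kernel module present and loaded?
      modprobe zfs
      lsmod | grep zfs

      # Does the pool look healthy? (A manual scrub can be kicked off any time with: zpool scrub pool1)
      zpool status pool1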

      daveinfla wrote:

      I also noticed that even though I have ZFS setup using all the drives they still show up as available when you click on "Create File System". Why?
      Would there be any reason to turn S.M.A.R.T. on or would that interfere with ZFS?

      Dave
      The drives showing up as available is an issue with the plugin. (It's not perfect.) Since you know all your spinning drives are dedicated to ZFS, don't do anything to them (like trying to wipe them or attempting to put another file system on them). Don't do anything to your array drives under File Systems or Physical Disks, and you should be OK.

      SMART won't interfere with ZFS. If you want to use it, turn it on.
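      If you do enable it and want to look at a drive from the CLI as well, smartmontools (which, as far as I know, is what OMV's SMART pages use under the hood) works fine alongside ZFS; the device name below is just a placeholder:

      # Health summary, attributes and error log for one of the array disks
      smartctl -a /dev/sda

      # Kick off a short self-test; the result shows up in the output above after a few minutes
      smartctl -t short /dev/sda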
      Good backup takes the "drama" out of computing
      ____________________________________
      OMV 3.0.90 Erasmus
      ThinkServer TS140, 12GB ECC / 32GB USB3.0
      4TB SG+4TB TS ZFS mirror/ 3TB TS

      OMV 3.0.81 Erasmus - Rsync'ed Backup Server
      R-PI 2 $29 / 16GB SD Card $8 / Real Time Clock $1.86
      4TB WD My Passport $119
    • New

      daveinfla wrote:

      flmaxey wrote:

      4. You don't need to automate a scrub.
      1. Good to know. I'm sure it's mentioned somewhere, if I dug far enough?

      flmaxey wrote:

      USB drives are easy to clone
      2. I spotted mention of the thumb drive plug-in elsewhere; can you explain how you clone your thumb drives? Can it be done while the production one is in use, or do you have to do this on, say, a workstation using a clone utility?


      3. What's involved in recovery if the flash drive fails?

      4. On these same lines, I've seen mention of using SSD drives as a cache. Is this overkill for, say, a 4-5 drive (RAID-Z1) system running off of a USB flash drive?


      5. Someone mentioned AMD. I'm personally not a fan of AMD, and I've read elsewhere that their support for ECC isn't fully implemented. I tried to find the original article but found the following instead: hardwarecanucks.com/forum/hard…amds-ryzen-deep-dive.html


      6. Someone please explain the versions listed on Sourceforge. I totally missed the link "Looking for the latest version?" with 3.0.86 listed, and downloaded 3.0.94 at the top of the list. Is 3.0.94 beta? What about 4.0.14? Neither the download page nor the Sourceforge page is very clear on what's the latest release vs. beta...
      Keyed to the above:

      1. Well, I didn't know about the automated scrubs until a few weeks ago. :) I had a scrub set up to run once a month, and was recommending scheduled scrubs to new ZFS users, when @cabrio_leo informed me of the automated scrubs. -> Scrub Some of these "features", while not hidden, are not obvious or public. In any case, the automatically added monthly scrub is enough.

      2. & 3. I clone USB thumbdrives at a workstation. If you want a continuation of your logs (not critical), you'd have to clone the working drive onto the spare, so that means taking the server down. These events also become my maintenance reboot, every month or two.
      I use 3 USB drives in rotation. (The rotation and the "why" of it is -> here )
      Since USB ports are on the outside of the case, recovery is dirt simple: boot up on a known-good thumb drive. Easy to do, in 2 or 3 minutes. For quick recovery from updates gone bad and other events, this routine has saved me some grief.
      - First I format the destination drive at the client, then test the empty drive with h2testw. h2testw tests empty space so it's a good idea to start with an empty drive. Use "Write+Verify". (Since I couldn't find a direct download, I attached h2testw to this post.)
      - If the drive has no errors:
      I've used two methods to clone: boot a client with a Clonezilla Live CD, or use Win32Diskimager. Cloning with Clonezilla is straightforward, but be sure to get the source and destination drives right. :!:

      With Win32Diskimager:
      I "Read" the source thumbdrive to a *.img file. I name the file something like OMV3.11-9-2017.img and while this file is quite large (roughly 32GB), if you keep it, it becomes an additional backup.
      (Note: If you have any Linux file system on your thumbdrive, when you insert it into a client, Windows will prompt you to format it. Obviously, you wouldn't want to do that.)
      - Then I write the *.img file to the destination drive. (Formatting the destination is not needed, the *.img file overwrites all.)
      - Boot up on the freshly written drive. The older source drive goes on top of the case.
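      For what it's worth, the same clone can also be done at a Linux client with plain dd (a rough sketch only; the device names are placeholders, so double-check them with lsblk before writing anything):

      # Identify the source thumbdrive (say it shows up as /dev/sdX) and image it
      lsblk
      dd if=/dev/sdX of=omv-boot.img bs=4M status=progress

      # Write the image to the spare drive (say /dev/sdY) and flush the buffers
      dd if=omv-boot.img of=/dev/sdY bs=4M status=progress
      sync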

      ***If you use USB thumbdrives, you must use the flashmemory plugin or they'll wear out quickly. With this plugin, you can't just turn it on. Look closely at the instructions on the plugin page. You'll need to alter a config file manually, on the CLI.***

      4. While using a USB thumbdrive to boot is fine performance-wise, that's a separate issue from array performance. For home use, using an SSD as a ZFS ZIL/cache device is going a bit far. That sort of refinement is for increasing performance in high-I/O production environments. Still, I suppose someone might make a case for it. :)

      5. Regarding the AMD mention:
      That was me but I'm not using an AMD Mobo for my server. The server platform is Intel i3 (not Xeon) and it supports ECC.
      (I got great deals on a fast AMD processor and an ASUS AM3 Mobo, so I built a low buck client. :thumbup: )
      The point was that some ASUS Mobos support both ECC and non-ECC, and other non-Intel OEMs may support ECC as well. But it may require some research.
      On the article, well, some of it was on the entertaining side. To suggest that a company CEO or a marketing type knows what they're talking about is laughable. They know "talking points", "product hype", and "what they've been told", but little else of what's actually going on at the technical grassroots. While the article was interesting, I'd tend to go with the Mobo OEM specifications and the test results from memtest86.

      6. OMV3.X is stable. OMV4.X is the current beta. To add a note in this regard:
      In using ZFS, you're using a file system that is not native to the Linux kernel. (This is why it takes a while to install; a module is being built to "plug into" the kernel.) In a potential upgrade from OMV3 to 4, you'd need to consider the kernel and ZFS versions you're currently using, and your existing zpools' compatibility with newer versions. I believe it would be safest to stay with OMV3.0.X until OMV4 matures. In any case, I'd do VM tests, importing a zpool created on OMV3 into OMV4, before actually doing it (a rough sketch of that test follows).
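      The VM test boils down to a plain export/import (a sketch only; "pool1" is the pool name from earlier in the thread):

      # On the OMV3 test VM: cleanly detach the pool
      zpool export pool1

      # On the OMV4 test VM (with the same virtual disks attached): list importable pools, then import
      zpool import
      zpool import pool1

      # Before running 'zpool upgrade' for real, review which feature flags the newer release would enable
      zpool upgrade -v
      zpool status pool1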
      Files
      • h2testw_1.4.zip

        (213.86 kB)
      Good backup takes the "drama" out of computing
      ____________________________________
      OMV 3.0.90 Erasmus
      ThinkServer TS140, 12GB ECC / 32GB USB3.0
      4TB SG+4TB TS ZFS mirror/ 3TB TS

      OMV 3.0.81 Erasmus - Rsync'ed Backup Server
      R-PI 2 $29 / 16GB SD Card $8 / Real Time Clock $1.86
      4TB WD My Passport $119


    • New

      flmaxey wrote:

      ***If you use USB thumbdrives, you must use the flashmemory plugin or they'll wear out quickly. With this plugin, you can't just turn it on. Look closely at the instructions on plugin page. You'll need to alter a config file, manually, on the CLI.***
      OK, thanks, I completely missed the instructions. I made the modifications and rebooted.
      It mentions, after making the modification, to "Enable the Plug-in" and reboot. I assume enabling it means installing it, as I don't see an actual enable button?

      THUMB Drives
      In reference to your thumb drive cloning: if it works, it works. I was just thinking of a more automated process, like I use on my Windows machines. I have StorageCraft (I get it free through work, wouldn't be able to afford it otherwise) and it allows me to take continuous snapshots of each and every volume on my workstation and server. Should either one of them fail, I can simply do a bare-metal restore back to the same or dissimilar equipment.

      You mentioned you use 32GB thumb drives; is that the smallest you'd advise using? I only ask because I have a couple of 8GB drives lying around, but I have no problem picking up a few 32GB SanDisk Ultras, they're cheap on Amazon.

      ECC & Motherboards
      I don't get Intel's logic for supporting ECC on only i3s and Xeons, but not i5s and i7s.

      My dislike of AMD comes from long-past experiences: energy-sucking, hot-running parts, and major issues with drivers not being stable and AMD not keeping them current. Of course I have no recent experience with them, so my concerns may very well be dated...

      I remember you or someone asking me about the size of my SuperMicro board (I know there's a joke in there somewhere :) ). In any case it's a standard ATX motherboard, model X8STE, with an Intel Xeon L5609 1.87GHz 4-core processor (40 watts) and 16GB of ECC memory. Not to mention it's free! As I mentioned earlier, the total power consumption was calculated at 164 watts; I'm OK with that. The Thermaltake MS-1 case holds 10 drives, and with a 120mm fan in the front, one in the back, and the power supply facing down on a wood floor, it not only runs almost silently, there is plenty of cooling. I have the same case with 6 3.5" drives in my Windows server and the drives are just warm to the touch. My OS drives are around 45C, whereas my 4 array disks are around 33C. The case temp is around 40C. The case will handle another two 120mm fans at the top, but I don't think it's necessary. The fans are 3-wire and the board has plenty of 3-wire connections.

      NEW Question - New thread?
      Does anyone have any experience with ownCloud or anything like it? I currently use my existing QNAP to back up the pictures on my phones (they have their own Android app to do this), and I'll need a replacement once I have OMV running.

      Thanks,

      Dave
    • New

      Since the thread question has been answered, it's probably time to start a new thread in General.
      (From here, you'll be looking at configuration items and issues.)
      ________________________________________

      On the flashmemory plugin, it's a good thing you caught that. Just having it installed doesn't work. (I learned that the hard way, a while ago.) I believe the flashmemory plugin had a toggle button at one time. Now, with it installed and configured, and with the reboot, it must be working. Otherwise, the SD-cards I have in the R-PI would be finished.
      (SD-cards seem to have the least durability of all flash media.)

      On the "size" of thumbdrives:
      8GB would work; however, there are things that can happen that would fill an 8GB boot drive in almost no time. (Media servers' metadata, like Plex's, and Urbackup's temp files are a couple of examples.) If log files or other data fill your boot drive to capacity, the first indicator you'll notice is that you can't log in to the web GUI. This issue will remain until you go in on the command line and make some space. With a larger drive, you'd have more time to realize that there may be a problem.
      Also, the reason for installing the flashmemory plugin is to reduce the frequency and size of the writes to your flash media. With flash media, the number of times a single location can be written is finite. This is what wears it out (SSDs included). Good flash media has wear leveling, meaning it has a controller designed to write from one end of the drive to the other before repeating. (This spreads wear.) With wear leveling and a larger drive, the number of times the same location gets written is reduced, which extends media life significantly.
      For the price point, I went with 3 SanDisk 32GB drives from Costco ($29 when I bought them). If you're going to use media server plugins like Plex or Emby, I'd go even larger, to 64GB. In any case, I wouldn't use anything less than 16GB. For the boot drive it's easy to clone a smaller USB drive onto a larger one and expand the file system, if you use something simple like ext4 (a rough sketch of that expansion follows).
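      Just to illustrate that last step (a sketch only, assuming an ext4 root on the first partition of the new, larger drive; device names are placeholders and growpart comes from the cloud-guest-utils package):

      # After cloning the smaller drive onto the larger one (here /dev/sdX), done at a workstation:
      # grow partition 1 into the extra space, then grow the ext4 file system inside it
      growpart /dev/sdX 1
      e2fsck -f /dev/sdX1
      resize2fs /dev/sdX1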

      On other things:
      - I like the idea of snapshots too, but you have to admit, being able to replace the OS AND the drive completely, AND reboot, in a couple of minutes is compelling. Swapping a thumb drive is about the fastest recovery possible. Further, there's nothing to restore or rebuild, and (re)cloning can be done at your convenience. Bare-metal restorations of almost any kind take more than just a few minutes.
      - I have no brand loyalty, but I believe things have changed, at least somewhat, with AMD. I got an AMD FX8320 "Black Edition" for $39. It's a hot processor (literally, 125 watts) and it's sitting at 86F (30C) with a 72F ambient. I got a similar deal on the ASUS AM3+ Mobo. I couldn't pass them up. (On the other hand, I've had the same experience with drivers that you did, back in the day.)
      - If you think about it, the i3s and Xeons cover the low and high end of server/NAS products. These are the most significant segments of the market in terms of bulk sales and premium high-end products. If Intel supported ECC in i5s or i7s (which they position as consumer/workstation products), they would be competing directly with their own Xeon processors on CPU power. Hence, with ECC-capable i5s and i7s, a high-performance server for medium and large businesses could be made on the cheap. Like M$, it's just a pricing model where it's thought that medium and large businesses can, and will, pay a premium for performance.
      _______________

      Before starting a new thread: what brand/model of server (Dell, Compaq / model #) did you strip the Mobo and processors from? 2U is full ATX? Great. Along these lines, I might follow your lead. I almost did, but finding a location in the house for a "2U pizza box" would be a pain, even in a closet. Repacking the Mobo and other server hardware in a large, low-priced case was a great idea.

      Thanks
      Good backup takes the "drama" out of computing
      ____________________________________
      OMV 3.0.90 Erasmus
      ThinkServer TS140, 12GB ECC / 32GB USB3.0
      4TB SG+4TB TS ZFS mirror/ 3TB TS

      OMV 3.0.81 Erasmus - Rsync'ed Backup Server
      R-PI 2 $29 / 16GB SD Card $8 / Real Time Clock $1.86
      4TB WD My Passport $119


    • New

      flmaxey wrote:

      the reason for installing the flashmemory plugin is to reduce the frequency and size of the writes to your flash media. With flash media, the number of times a single location can be written is finite.
      That's not the problem with the flash media we're talking about here, since even the cheapest products (even the counterfeit crap) implement at least primitive wear leveling.

      The real problem is 'write amplification': writing 1 byte at a time at the filesystem layer to any flash media might end up with orders of magnitude more data being written at the flash layer (that's all write amplification describes). That's what the flashmemory plugin is for: reducing write amplification by orders of magnitude, even if it's set up so that all changes are synced to disk hourly (which is something I would recommend, since otherwise contents in RAM are only written back to 'disk' at shutdown and are lost when the server crashes).

      Without the plugin, writing 5 bytes of log contents every minute to flash media that uses a 16 KiB internal page size might end up with almost one MiB being written every hour (60 writes x 16 KiB = 960 KiB) on a drive that is entirely full and does not implement TRIM (which, to my knowledge, applies to all USB thumb drives out there except those that are in reality mSATA or M.2 SSDs in very recent USB enclosures with the latest firmware upgrades applied).

      So it's important to understand write amplification and, on media with small capacity, why TRIM matters (flash media not implementing TRIM starts to see horribly high write amplification as soon as the total data written to the device over its lifetime exceeds its native capacity -- then all writes end up as read/modify/write cycles combined with at least one erase block being freed and overwritten -- all flash media has some spare pages for this purpose).

      TL;DR: With 'dumb' thumb drives or SD cards, for the above reasons (write amplification too high, no TRIM available, and no SMART attribute to query the drive's wear-out indicator), the flashmemory plugin is mandatory, while with SSDs it's simply a good idea to use it since it reduces write amplification in any OMV use case.
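      If you're curious how much actually gets written to your own boot drive at the block layer (the part the plugin can reduce; on 'dumb' thumb drives the extra flash-layer writes on top of that stay invisible to the host), the kernel's block statistics give a rough number (device name is a placeholder):

      # Field 7 of the stat file is sectors written since boot (one sector = 512 bytes)
      awk '{print $7 * 512 / 1024 " KiB written since boot"}' /sys/block/sdX/stat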
      'OMV problems' with XU4 and Cloudshell 2? Nope, read this first. 'OMV problems' with Cloudshell 1? Nope, just Ohm's law or queue size.
    • New

      tkaiser wrote:

      flmaxey wrote:

      the reason for installing the flashmemory plugin is to reduce the frequency and size of the writes to your flash media. With flash media, the number of times a single location can be written is finite.
      The real problem is 'write amplification': writing 1 byte at a time at the filesystem layer to any flash media might end up with orders of magnitude more data being written at the flash layer (that's all write amplification describes). That's what the flashmemory plugin is for: reducing write amplification by orders of magnitude, even if it's set up so that all changes are synced to disk hourly (which is something I would recommend, since otherwise contents in RAM are only written back to 'disk' at shutdown and are lost when the server crashes).
      That's exactly what I was getting at, if in shorter form. The plugin, along with wear leveling and reasonably sized flash media, will result in longer life.

      However, the finer points are appreciated.
      Thanks.
      Good backup takes the "drama" out of computing
      ____________________________________
      OMV 3.0.90 Erasmus
      ThinkServer TS140, 12GB ECC / 32GB USB3.0
      4TB SG+4TB TS ZFS mirror/ 3TB TS

      OMV 3.0.81 Erasmus - Rsync'ed Backup Server
      R-PI 2 $29 / 16GB SD Card $8 / Real Time Clock $1.86
      4TB WD My Passport $119
    • New

      flmaxey wrote:

      Before starting a new thread: what brand/model of server (Dell, Compaq / model #) did you strip the Mobo and processors from? 2U is full ATX? Great. Along these lines, I might follow your lead. I almost did, but finding a location in the house for a "2U pizza box" would be a pain, even in a closet. Repacking the Mobo and other server hardware in a large, low-priced case was a great idea.

      My company retired two 2U SuperMicro servers; yes, they make both motherboards and whole servers, not to mention other accessories.
      They were retired several years ago; I used one for VMware for a while, and the 2nd has always been a spare. I got tired of the noise coming from my network closet (the 2U chassis is a screamer with 4 small fans plus the PSU), so I decided to build a silent dedicated Windows server last year, continuing to use my 2-bay QNAP for backups.

      My data footprint has grown and I need a larger NAS, which is where stripping the SuperMicro 2U came in. One to build with, one as a spare.

      The motherboard is pretty much a standard ATX server board, with no manufacturer-specific plugs or wiring, so I can use it in any ATX case. NOTE: Not all manufacturers' boards are like this; Dell and HP are usually custom boards. Before the ASRock and other Atom mini boards came out, SuperMicro was one of the preferred server boards for NAS builds; they may still be for some?

      I'm putting the board in the Thermaltake Mid-Tower Commander MS-I amzn.to/2AJyyi3, which like I mentioned will support 10 3.5" drives AND, I forgot to mention, a 2.5" drive at the bottom of the case. It comes in Black and Snow White. The Black has a weird USB 3.0 connector for the front: instead of a standard motherboard header on the motherboard end, it uses a USB A-male plug, which on most boards you'd have to pass through the back of the case and plug into one of the external USB ports. Check out the pictures here: bit.ly/2BvtTn6. Why, I don't know, but coincidentally the SuperMicro board has two A-female ports on the board surface, so it will work out for me. The Snow White version has the standard motherboard header. When I used the black case for my Windows server build, I had to buy a converter to plug it into the motherboard. I was NOT routing it to the back of my PC via a card slot; that's just wrong...

      Here's the motherboard, it's old but it works: supermicro.com/products/motherboard/Xeon3000/X58/X8STE.cfm

      I see Nextcloud has split off from ownCloud; I guess I should post a new thread for that, or look for an existing one...

      Thanks,

      Dave