Converting EXT4 to BTRFS

    • OMV 4.x


    • Converting EXT4 to BTRFS

      Hey guys,

      Currently I have 3 HDDs in my server (2x 8TB and 1x 12TB), all using EXT4 (no RAID). I was just looking at BTRFS and wondered: are there any benefits to swapping from EXT4 to BTRFS (or another filesystem such as XFS), and if so, can I convert without data loss? Although I have offline backups, it would be a pain to transfer all of this back over. Just wondering, as BTRFS is newer (and I suppose will become the new standard at some point?), is it worth it? Or do you think EXT4 is the better file system of those available on OMV? Just wondered what your opinions are.
    • BTRFS has checksums and COW (copy on write) and snapshots. Great stuff. But only if you know how to use it and actually intend to use it. It used to be a little flaky in some circumstances, but that may be fixed now?

      EXT4 is faster, I believe. But for a NAS it is the network that usually is the bottleneck, not the filesystem. Unless the filesystem is NTFS. EXT4 is very stable and boring. Boring is good for a filesystem.
      OMV 4, 7 x ODROID HC2, 1 x ODROID HC1, 5 x 12TB, 1 x 8TB, 1 x 2TB SSHD, 1 x 500GB SSD, GbE, WiFi mesh
    • If you go to BTRFS, you'd better be running your NAS on a UPS. Otherwise you're going to be seeing "Error = parent transid verify failed" real soon.
      Hint: If your file system is not set "read only" or "bricked" by the above error, you might need the following commands to resurrect it.

      btrfs scrub

      (If that doesn't work.)

      btrfs-zero-log

      I'd say, "Good Luck" but I don't really believe in it. Rather than rely on luck, perhaps you might consider looking at the BTRFS project page.
      (This is what they'll admit to and it's been in roughly the same state for years.)
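      As a sketch of how the two recovery steps named above are actually invoked (device name and mount point here are placeholders, and modern btrfs-progs spell the old standalone btrfs-zero-log tool as a subcommand):

      ```shell
      # Hypothetical recovery sequence for a btrfs volume on /dev/sdb1 mounted at /srv/data.
      # Adapt device and mount point to your own system.

      # 1) If the filesystem still mounts: scrub reads all data and metadata and
      #    verifies it against the stored checksums, repairing from redundant
      #    copies where possible. -B runs in the foreground until finished.
      btrfs scrub start -B /srv/data
      btrfs scrub status /srv/data

      # 2) If it no longer mounts and a damaged log tree is suspected, clear the log.
      #    WARNING: this discards the last few seconds of writes; last resort only.
      btrfs rescue zero-log /dev/sdb1
      ```

      Both steps operate on a real block device, so treat this as a sketch to adapt, not something to paste blindly.
      
      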


    • /Ky wrote:

      just looking at BTRFS and wondered is there any benefits swapping from EXT4 to BTRFS
      You can learn about the benefits for example here or here. It's not just Synology that lets its users benefit from btrfs; Netgear's ReadyNAS devices have relied on btrfs since 2013. QNAP's marketing, in an attempt to differentiate from its competitors, focuses on allegedly 'low btrfs performance' and tries to explain why choosing an ext4-based QNAP NAS is the better variant (IMO a lot of marketing BS spread there).

      Some people say btrfs is dead now since Red Hat pulled btrfs out of their RHEL releases a few years ago, but does this really matter? Not at all.

      Some people ran experiments with the most horrible 'NAS setups' possible, like a Raspberry Pi with USB disks behind USB hubs, and came to the conclusion that btrfs is unreliable, especially with sudden power losses. Other people working with enterprise storage gear report exactly the opposite: "Had HP pull half of my SANs offline while in use over a weekend; ext4 partitions came out of it with varying amounts of damage, btrfs came out unscathed."

      You'll find 'wisdom on the Internet' explaining why every single filesystem out there is a mess and leads to data destruction. If you check the facts, it's often people who experienced problems back when the software was not yet ready (see all the nice XFS drama experienced by people using kernel 2.6, and so on).

      What also adds to the confusion is that a lot of people hate change in general and ignore the 'quality aspects' of modern filesystem approaches. If you get data corruption due to a crappy hardware setup, then with old filesystems from the last century the issues remain undetected until it's too late, while 'checksumming' filesystems like btrfs or ZFS will point at data corruption almost immediately (even preventing you from mounting the filesystem, to prevent further damage and to force you to fix the underlying issues first).

      How do an awful lot of users deal with such an approach? Instead of realizing they use inappropriate hardware that corrupted their data, they love to blame the software (the filesystem in this case) and tell stories like 'no issues with ext4 but constant hassles with btrfs/ZFS'. It's always also an issue of ignorance.
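      The 'checksumming catches corruption early' point can be sketched at file level with ordinary tools; a filesystem like btrfs does the same per block, automatically, on every read. Paths here are illustrative:

      ```shell
      # Simulate what a checksumming filesystem does, at file granularity.
      printf 'pretend this is a media file' > /tmp/demo.bin
      sha256sum /tmp/demo.bin > /tmp/demo.bin.sha256   # checksum recorded at write time

      # Simulate silent bit rot: overwrite one byte in place, checksum not updated.
      printf 'X' | dd of=/tmp/demo.bin bs=1 seek=5 conv=notrunc 2>/dev/null

      # Verification at read time flags the corruption instead of silently
      # handing back bad data -- which is exactly what ext4 would do.
      if ! sha256sum -c --quiet /tmp/demo.bin.sha256 2>/dev/null; then
          echo "corruption detected"
      fi
      ```

      The difference to ext4 is not that corruption happens less, it's that it never goes unnoticed.
      
      
      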
      No more contributions to this project until 'alternative facts' (AKA ignorance/stupidity) are gone
    • tkaiser wrote:

      flmaxey wrote:

      If you go to BTRFS, you'd better be running your NAS on a UPS. Otherwise you're going to be seeing "Error = parent transid verify failed" real soon
      On what is this advice based?
      Personal experience. Try running BTRFS without a UPS. If you're in an area where AC line hits or outages are fairly common, it shouldn't take too long.

      Or, as I've noticed, you're one who tests everything. (Which is always prudent.) With BTRFS on a data drive, start a data copy or spooling files (video files, something large), and pull the AC plug. See how many times it takes before Error = parent transid verify failed pops up, and what it takes to correct it.
      __________________________________________________________________


      I was using BTRFS as a data-drive FS, in a portable application, on a media server for traveling (an SBC and an external drive). I chose BTRFS because of its low resource requirements and "Copy on Write" capabilities where, supposedly, it's unlikely to lose data. In the end, while the data might technically still have been on the drive, if the file system's housekeeping processes lock the user away from it (Error = parent transid verify failed), the net effect is the same: data loss. After a couple of volume rebuilds, I went back to EXT4. In the same application and on the exact same hardware, EXT4 was, and is, rock solid.

      To be fair, without a UPS, ZFS might do something similar, but I wouldn't run ZFS or any of the other advanced file systems at the mercy of the AC line. While advanced file systems have lots of great features, they appear to have practical limitations as well. BTRFS is no exception.
      __________________________________________________________________

      Adoby wrote:

      I believe XFS is even more stable and boring than EXT4. Nice if you have big servers and big data.
      Depending on the application, there's something to be said in favor of "boring" file systems. Mature file systems, with lots of well developed tools and processes for fixing them, can be a good thing. And, if video content is considered, almost everyone has "big data".
    • flmaxey wrote:

      I was using BTRFS as a data-drive FS, in a portable application, on a media server for traveling (an SBC and an external drive)
      I know, your whole 'personal experience' relies on some tests with a Raspberry Pi and USB attached storage. That's the real problem.

      The RPi is the most crappy platform possible for a 'NAS'. Disk storage can only be attached via USB, the RPi's USB implementation is a little quirky, and since there's always also something like a USB-to-SATA bridge in the data path, you can never trust in 'correct flush/barrier semantics'.

      CoW filesystems rely on data written to disk REALLY being written to disk when 'the disk' sends an acknowledgement (and not the data still living in a cache along the data path while 'data has been saved to disk' is reported back). Using CoW filesystems on USB-attached storage of questionable implementation is simply asking for trouble. The whole approach is broken by design.
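      The flush/barrier point can be illustrated with ordinary tools (filenames here are illustrative). A CoW filesystem's correctness depends on the storage honoring exactly these 'force it to stable storage now' requests; a USB bridge that acknowledges a flush without actually completing it breaks the transaction model:

      ```shell
      # Write a file and explicitly force it through every cache to stable storage.
      # A CoW filesystem issues the same kind of flush before committing a
      # transaction; if the flush is faked, consistency guarantees are gone.
      dd if=/dev/zero of=/tmp/payload.bin bs=64K count=4 conv=fsync 2>/dev/null

      # GNU coreutils sync(1) can also flush a single file (or -f for its
      # whole filesystem) rather than everything system-wide.
      sync /tmp/payload.bin
      ```
      
      
      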

      SATA/SAS have checksumming mechanisms to protect data integrity on the wire, the connectors are more reliable, and there shouldn't be issues with flush/barrier semantics, unlike with USB-attached storage, where such issues are more the rule than the exception.

      So if you want to repeat your 'storage on the go' experiment, skip those lousy RPi jokes and choose for example an Olimex A20 Lime2 as the SBC (real SATA and a real UPS mode -- the latter of course NOT needed for btrfs). And please also stop spreading 'btrfs needs a UPS', since that's simply not true.


    • henfri wrote:

      The wiki did:
      btrfs.wiki.kernel.org/index.ph…t3&diff=32569&oldid=30301
      Thank you, I totally missed the context (conversion from ext4 to btrfs).

      @/Ky BTW: I'm not trying to encourage you to switch to btrfs (on x86 I prefer ZFS for example). Just wanted to add some more food for thought.
    • I appreciate the feedback guys :) I have done a lot of reading and, rightly so, there are mixed opinions knocking about. For now I think I'll stick with EXT4 until BTRFS becomes more mainstream, as I don't see the point in upgrading yet when I don't think I would see the benefits. That's why I was looking at XFS as well, as I have quite large chunks of data, with my server mainly handling media.
    • /Ky wrote:

      That's why I was looking at XFS as well, as I have quite large chunks of data, with my server mainly handling media.
      XFS and ext4 are today the most robust filesystems available on Linux. But with your use case the filesystem of choice doesn't matter anyway, since it seems you won't be doing permanent snapshots or needing data integrity (media file formats can cope pretty well with bit rot; if a flipped bit does not affect internal structures, all you'll get from corrupted data are artifacts on screen when watching later). Simply use any of the available POSIX-compliant choices, or in other words: stay with ext4.
    • henfri wrote:

      @tkaiser
      The wiki did:
      btrfs.wiki.kernel.org/index.ph…t3&diff=32569&oldid=30301

      tkaiser wrote:

      Thank you, I totally missed the context (conversion from ext4 to btrfs).
      Still don't get it. There was a warning in 2016, based on the fact that the conversion tool was seldom used, and this warning was removed in 2018. What am I missing?
      Odroid HC2 - armbian - Seagate ST4000DM004 - OMV4.x
      Asrock Q1900DC-ITX - 16GB - 2x Seagate ST3000VN000 - Intenso SSD 120GB - OMV4.x
      Backup - Solutions to common problems - OMV setup videos - OMV4 Documentation - user guide
    • tkaiser wrote:

      @/Ky BTW: I'm not trying to encourage you to switch to btrfs (on x86 I prefer ZFS for example). Just wanted to add some more food for thought.
      Yeah, this sort of thing is real useful to beginners, "food for thought". Thanks for chiming in with that.

      /Ky wrote:

      Thanks guys for the advice, was looking at XFS also (and yes I have a UPS), but tbh it looks like I wouldn't really see much of an advantage. I agree, I think EXT4 is better to stick with from my reading :)
      This is a safe choice, and it's good that you have a UPS. With the full backup you have, looking at some of the advanced file systems might make sense later on.
      Until then, you can get many of the same benefits from rsnapshot and SnapRAID, and "RAID-like" drive aggregation with UnionFS. All are available as plugins in OMV4.
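      As a sketch of how the command-line equivalents of those plugins fit together (the interval name and scrub percentage are examples; both tools are driven by config files you'd set up first):

      ```shell
      # rsnapshot: rotate in a new snapshot for the 'daily' interval
      # (intervals are whatever retain lines you defined in rsnapshot.conf).
      rsnapshot daily

      # SnapRAID: update parity to cover files changed since the last sync...
      snapraid sync
      # ...and periodically verify a sample (here 5%) of the array against parity,
      # which is the closest ext4 + SnapRAID gets to a btrfs/ZFS scrub.
      snapraid scrub -p 5
      ```

      This gives rollback (snapshots) and bit-rot detection (scrub) on plain ext4 data disks, at the cost of it being scheduled rather than continuous.
      
      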


    • tkaiser wrote:

      (real SATA and a real UPS mode -- the latter of course NOT needed for btrfs). And please also stop spreading 'btrfs needs a UPS', since that's simply not true.
      Truth is a matter of perspective, and a subject for philosophers and sages. My saying "BTRFS needs a UPS" is every bit as anecdotal as your saying "that's simply not true".

      Here's the thing to note if one is objective: ALL PCs, regardless of operating system, need a UPS. Otherwise, one is gambling with their data. Also, I'm absolutely sure that you're using a UPS on any system that has data you want to keep.

      So, yes, I plan to continue to "spread the word" that BTRFS, ZFS, really all of them (EXT4 included) need to be on a UPS.
    • flmaxey wrote:

      ALL PCs, regardless of operating system, need a UPS
      You missed decades of filesystem engineering, you missed what CoW (Copy on Write) is for (data consistency), and you missed that CoW filesystems like btrfs and ZFS have problems with crappy USB storage (which all your personal experience with btrfs seems to be based on), since "data really written to disk" is nothing crappy USB storage can guarantee.

      Please just think a single second about why the btrfs benchmarks on USB storage here are failing: phoronix.com/scan.php?page=art…inux-50-filesystems&num=3

      Hint: no power losses involved, no UPS needed, only less ignorance needed.

      BTW: I talked to a friend recently who is responsible for the IT infrastructure of one of the world's largest agencies. To my surprise, they don't use UPSes any more in their data center since... too expensive, and it just works. There's something called progress, and CoW filesystems are part of it.
    • So not using a UPS is a sign of progress? :) The grid works, until it doesn't, and it doesn't matter what country you're in. It's a matter of time. If doing away with the UPS was your friend's idea, perhaps your friend should keep his resume current.

      tkaiser wrote:

      You missed decades of filesystem engineering, you missed what CoW (Copy on Write) is for (data consistency), you missed that CoW filesystems like btrfs and ZFS have problems with crappy USB storage
      Um, no, you're wrong on this account. After eliminating BTRFS as a possible contender, I chose ZFS mirrors and set up automated snapshots, strictly for data consistency and preservation. And what I've discovered about SBCs is that most of them are novelty hardware, with way too many caveats and idiosyncrasies among the various models.

      On the engineering end of it: to "miss" something, one would have to want to be part of it and, frankly, what goes into filesystem engineering is nowhere near as important as the end product. One doesn't have to know the engineering minutiae that go into a car to maintain it, drive it, and recognize that one model, relative to others, is lacking. In the same analogy, BTRFS is like a car with windshield wipers that are "mostly OK" when it's raining. It's also like a first-model-year new car: common sense dictates that buying it is foolish until the bugs are worked out.

      I could hazard a guess at what you've missed in the past few decades but the observation wouldn't change anything.

      tkaiser wrote:

      Please just think a single second about why the btrfs benchmarks on USB storage here are failing: phoronix.com/scan.php?page=art…inux-50-filesystems&num=3
      And yet there are two much more important points you seem to be missing, in the very same article: BTRFS performance was between poor and abysmal (it was dead last in almost all of those tests), and it was the only file system that had problems with USB-connected media. On the other hand, I suppose those characteristics might make BTRFS better suited for home use, in a world where elephants are pink.

      tkaiser wrote:

      Hint: no power losses involved, no UPS needed, only less ignorance needed.
      There are many types of ignorance and, notably, some of the more important levels of it have nothing to do with tech subjects.

      Take this thread we're in right now, as an example:
      It has your exhaustive explanations and links to articles you read somewhere on the internet, all of that to declare in your final analysis: "BTW: I'm not trying to encourage you to switch to btrfs" AND "XFS today and ext4 are the most robust filesystems available on Linux" AND "But with your use case the filesystem of choice doesn't matter anyway".
      I mean, really? Could there be any point in all that, other than to try to endorse a file system that you're not recommending? :) This is the stuff of true comedy, if one has the time to sift through it.
      _______________________________________________________________________________________________

      And then there's the "voluntary selective blindness", the cherry-picking of internet articles that support a myopic point of view:
      Why don't we look at some real-world BTRFS experience, several years of it in fact. You've probably read this but, just like the dismal performance in the article, you chose not to "see" it for some reason. (The "NAS use case" is specifically mentioned.)
      Maybe there's something wrong with the hardware, maybe this Adobe employee missed decades of file system engineering, or maybe it's just ignorance? Right.... :)
      _____________________________________________________________________________________________________
      (From the BTRFS proposal thread:)

      elfhack commented a day ago:
      I'll just chip in.

      I've been using btrfs in production for more than 5 years, on SLES. Feature-wise it's great. Snapshots are really a killer feature, and I absolutely understand why @votdev might want to base omv around this, because I did the exact same thing, giving my developers access to instant snapshot functionality of our production app (Adobe AEM if anyone cares, a Java app which dumps ridiculous amounts of data). Having silent corruption protection is also a great bullet point.

      BUT if I could turn back time, reverse my decision, and use anything else that gave me that functionality, I would in a heartbeat (ZFS is not and will not be in mainline, so I don't care for it). btrfs absolutely destroys performance, especially when there are many snapshots of directories that contain rich metadata (think lots of nested folders with small files). At times it would take us half an hour to delete a directory. You could replicate this by doing what any Mac user would experience if he made a Time Machine backup: create about 100GB in 8MB files, and do an ls in that directory. Having a cup of tea is recommended. One instance couldn't take more than one snapshot before giving back ENOSPC errors because the metadata couldn't grow, and no, running btrfs balance did not fix it (or at least not reliably).

      To top it all off, I recall some kernel dev some (long) time ago explaining that it's not a case of raid56 being broken now, but more an on-disk format design fault that can't really be fixed with a simple patch, though I can't find it atm, so take that as you may. From my perspective, the devs have taken a bunch of decisions in the wrong technical directions, and I have little faith that things will become better at some point, if at all.

      In other words - if OMV 5 will really be btrfs-exclusive, I simply won't upgrade, because I worked with btrfs long enough to not trust it with a workload that a NAS should have. Though again, I would fully understand the decision, because there is nothing else available in Linux that gives the functionality that btrfs offers. Caveat Emptor
