Posts by ellnic

    Well, having thought about how I can best work round this for now, I have come to the conclusion that a separate email and app with instant notifications are the best option in the meantime. My default mail app only has audible notifications and badges because I get too much mail already. This approach would increase that amount, so it needs segregation. I'll trial it and see how it goes.

    Just in case anyone else would like to do this, on my iPhone I did the following:

    1. Create an email account just for OMV
    2. Download Edison Mail app from App Store - because it has awesome features
    3. Add email account to it
    4. Login to OMV via web and change email to newly created one
    5. Set OMV notification prefs to Instant notification via email for all required [in the forum CP]
    6. Set Edison to have notification banners for all
    7. Set unique alert tone for Edison to differentiate between it and normal mail or Tapatalk
    8. Set Edison to open links via your chosen browser or its own
    9. Disable Edison’s organise mail by thread option

    Time will tell, but this should hopefully provide a better experience than manually checking.

    +1 I am checking manually [and not regularly at that] but am missing a ton so the forum experience is miserable for me. :-(

    Just to add though, the plugin is active but it needs troubleshooting. Something to do with the forum's anti-spam is interfering with it. Specifically, the Tapatalk devs logged in and can see that the forum is throwing a 500 error to the app.

    The Tapatalk devs have said they will help and are waiting ready... BUT - they need the forum logs to see why the error is being thrown. I have requested these and so have the devs directly.

    Last and certainly not least, I put a lot of time and effort into getting the original problem rectified. I recorded videos, tested tested tested, hassled the Tapatalk guys even though they were adamant that the forum was to blame. I feel extremely disappointed that we have fallen at the last hurdle. This is not meant to upset, it's just how I feel. There is nothing more I can do.

    How can I know if the current kernel is in Hold status or not? There is no visual indication in the GUI.

    Maybe you could disable the button that doesn't apply, i.e.:
    - If the kernel is in hold status, only the Unhold button is enabled, and vice versa...

    You can see what is held with:

    apt-mark showhold

    For example, I see:


    Not sure how easy the variable hold button would be to implement, but another option is a "Show Status" or "Show Holds" button that gives the output of the above command.
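    For anyone following along at the CLI, the whole hold/unhold cycle looks like this (the package name below is just an example - check what your kernel metapackage is actually called first):

```shell
# Hold the kernel metapackage so 'apt upgrade' skips it (run as root).
# 'linux-image-amd64' is an example name - substitute your own.
apt-mark hold linux-image-amd64

# List everything currently held - this is what a "Show Holds" button could display.
apt-mark showhold

# Release the hold when you're ready to upgrade again.
apt-mark unhold linux-image-amd64
```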

    If being officially considered by me, then yes :) I tried to convince Volker to switch for OMV 4.x.

    I wonder if this will happen for OMV 5.... @votdev is this a possibility? :)

    The ACL add-on is a "patch" that's stored in extended file attributes. There can be interesting and odd effects when basic and extended permissions clash.

    I don't use ACLs. :) That might be why I see the permissions in my pools as less of a headache.

    I'd argue this one with you until the CoW (file systems) come home. :) I don't see any array as "redundancy". I see an array (regardless of flavor) as a single disk and a single point of failure. I'm only using a ZFS mirror of (2x4TB) for convenience, so I can have 4TB of storage with 100% bitrot protection. Functionally (while I'll grant that a mirror is better) I'd see a single 8TB disk with copies=2 as the rough equivalent of a 4TB Zmirror. In either case, using a Zmirror or a single 8TB basic volume, I'd have full data backup on an external host. The backup on the external host is what I call redundancy.

    But you're sacrificing a ton of possible speed by not using multiple drives. Also, a failing drive is more likely to have both copies wrecked. copies=2 is a special-use-case parameter, if you ask me. I only use it for drives that are offline the majority of the time, where they are stored in a space-restricted location - i.e. the bank.

    Even in a single disk scenario, I used BTRFS tools to recover from just a small bit of file corruption and, well, I'm not sure exactly what happened other than zeroing logs/counts and, maybe, resetting checksums on potentially corrupted files that were still corrupt. (And of all the simple things one would want in such a scenario, it proved impossible to find out the names of the corrupted files so they could be replaced.) Fortunately, the disk in question was just one of 3 backups, so a real recovery was easy enough.
    For my purposes, that experience alone will be enough for me to keep BTRFS at arm's length until the "as yet to be discovered" bugs in the file system and its utilities are unearthed and patched. The utilities themselves need more refinement.

    While I'll be the first to admit that I can't predict the future, I still think the future of BTRFS is far from certain. There's a principle concerning "time" that comes into play when excessive amounts of it are wasted: it's called "being overtaken by events". There have been file systems that had great promise, that didn't deliver, and fell by the wayside while others took their place. ReiserFS/Reiser4 comes to mind. It had SUSE, a heavy hitter, and other corporate sponsors as well. (On the other hand, when the developer murdered his wife, "that" didn't do much to help the project. :) )
    Bottom line, I'm going to stick with ZFS because, for my purposes, there's no other viable choice.

    Still, I can't help but wonder at the state of file systems in general. As areal densities keep increasing, with OEMs stuffing more bits onto platters by packing them in tighter, one would think that bitrot protection would have come to the forefront when drive capacities exceeded 2TB. With areal densities now reaching insane levels, in the 8TB-and-up range, I believe "bitrot" will become a much better known term in times to come.

    Yeah it's BTRFS headaches like that which make me stick with ZFS for the time being. A friend of mine thinks HAMMER is great, but he's not a Linux guy. I know very little about it tbh, but from what I can see it looks fairly good.

    I'm not trying to pick apart your post, but it's easier to multi-quote for a large chunk:

    Frankly, I'm not a fan of ZFS. Out of the box, ZoL requires a handful of tweaks to "approximate" Linux permissions.

    Do you mean permissions for data in your pool? I.e. files and folders - POSIX permissions? If so, these don't differ whether you use FreeNAS or OMV. They're just POSIX permissions.

    However, in the search for something that would protect long term data stores; if one wants integrated protection from bitrot and other silent corruption, ZFS is the only viable option available.

    Well, BTRFS will do this, it just doesn't have RAID5/6 and isn't as mature (and VM performance sucks the last I checked) so isn't suitable for all. But yes, if you need RAID5/6, want a proven track record and use VMs, then ZFS is it.

    It's not even necessary to have a Z-array for protection - a basic volume would work, with copies=2 set, on sensitive filesystems.

    For pure bit rot, yes. But you'd be better off with a mirror for some redundancy.
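    For anyone wanting to try it, the property is one command (the pool/dataset name below is hypothetical - substitute your own):

```shell
# Keep two copies of every block on this dataset ('tank/important' is a
# hypothetical name - substitute your own pool/dataset).
zfs set copies=2 tank/important

# Confirm the property took effect. Note it only applies to data written
# after it is set; existing files keep a single copy until rewritten.
zfs get copies tank/important
```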

    I'm not making more out of bitrot than it actually is. A flipped bit in a picture and a single pixel changes color. In a document, an extraneous character may appear, a word may be spelled incorrectly, etc. However, the cumulative effects add up over the long haul, which, I believe, should be given due consideration.

    If only this were true. Sadly, a single pixel or char equates to way more than one bit (a pixel is at least 1 byte = 8 bits, and usually 3 or more bytes in color images). With chars, you have size, font etc. See this:

    The 2nd image is pretty much half gone, and that's just one bit flip. The worst is 3 bits. Think of it more like a Jenga tower of neodymium magnets, all aligned with their poles in the 'correct' direction for the stability of the tower. Flip a magnet and... :cursing:
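    Even in a best-case uncompressed image, a single flipped bit in an 8-bit pixel can shift its value by up to 128 - half the brightness range - and in compressed formats the damage cascades far further, as above. A quick sketch of the arithmetic (the pixel value is arbitrary):

```shell
# A single bit flip in one 8-bit greyscale pixel value (200 here is arbitrary).
pixel=200
flipped=$(( pixel ^ 0x80 ))   # XOR toggles the most significant bit
echo "$pixel -> $flipped"     # value jumps by 128: 200 becomes 72
```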

    For businesses of any size, I can't see where they could choose anything else (other than ZFS) at this point in time. Automated snapshots allow for the retrieval of business-sensitive files, accidentally or deliberately deleted, for up to a year. And snapshot retention can be adjusted for even longer periods. For that reason alone, ZFS should be supported for small and medium sized business use cases.

    Don't forget Windows admins. They have ReFS and shadow copies. I wouldn't trust it, but that's the Windows equivalent.

    BTRFS? It "sounds good" but what has been delivered is nowhere near living up to what has been promised. Because it's native to Linux, provides bitrot protection and has great (theoretical) features, I really "wanted" BTRFS for data storage. However, any objective assessment of the project leaves one with the idea that it's going to be a long time before the issues are worked out, if ever. (After being on the mailing list for some months, in times past, it seemed as if the clean-up of the issues plaguing BTRFS was going nowhere.) In any case, BTRFS wouldn't be the first file system to be abandoned due to development delays and the loss of interest during/after an extended development period.

    I totally agree that it's a bit of a disappointment in terms of its current state and the time it's taken to get there. But it's got almost zero chance of ever being abandoned, and will undoubtedly mature to the point where it's like ZFS - yet brings its own goodness (shrinking pools, for one). With the likes of Facebook using BTRFS and contributing heavily to it, it's not going away any time soon. The problem is, companies like Facebook don't use RAID5/6, so that's getting sorted at a snail's pace.

    And it's not as if I "like" ZFS, now that I'm familiar with it.

    I think it's the best of a 'bad bunch'. ZFS takes a bit of setup and doesn't integrate as well, yet it's proven and solid. BTRFS plays very well (only with Linux) but still has kinks. I also like the command structure and command output of ZFS over BTRFS. As I mentioned before though, if BTRFS matures more and gets ported to Unix, I'll jump. What I really want is Bcachefs... wouldn't it be lovely if that matured and was available on Unix/Linux/Windows? :P<3

    Users in production environments, small businesses and the like, don't like doing version updates every year. That's what LTS is all about: setting up and actually using the server or workstation for a while.

    With the configuration worked out and without the need for more features, do you think that users are still running OMV2?
    (I think all would be surprised at how many are still running OMV1.)
    I'm guessing a good part of the user base is still on OMV2 and there's nothing wrong with that. With that noted, being able to update the underlying OS of OMV2, Samba and other packages, keeps older versions more secure and viable for longer use.

    Bottom line: if older versions are still in use (and they are), it's better if the internals are up to date and secure.

    This is purely from a Debian point of view, but Wheezy was officially EOL on 31st May, so it isn't secure any more. It would be advisable for anyone who does use OMV 1 and 2 to jump. OMV 3 will be EOL from Debian's POV on 30th June 2020. Since there's no upgrade path from OMV 1/2 onwards, it would be advisable to go straight to OMV 4 if you don't need OMV 3 plugins.

    @ryecoaaron so is this something that's being considered? Ubuntu I mean. Or is this just a wish list thing at the moment? :)

    @sbocquet to my knowledge, older kernels with newer ZFS versions is not an issue. It was just an issue with 4.16 needing 0.7.9.

    @flmaxey A couple of years ago I wouldn’t have considered Ubuntu as an option for anything critical or stable.. in recent years that has changed. I’ve been thinking quite a bit about it over the last couple of days and the more I think about it, the more I think Ubuntu may actually be a better choice for OMV 5.

    Some boring stuff you probably don't need to know, but it led to my original opinion of Debian/Ubuntu, and my opinions now. I'm a multi-OS user but prefer Mac OS. With Linux, I've tried a ton of them. I first toyed with Linux back in the days of Yellow Dog. It was a rather miserable experience, and involved a lot of manual everything. Bearing in mind that Yellow Dog was for PowerPC chips, I was left wondering what the hell I was doing playing with a piece of junk like that when System 8 was so much more pleasing to use, despite its flaws.

    I left Linux alone for quite a while and revisited many years later with Ubuntu, back when they used to drop CDs in the post free of charge. I liked Ubuntu because it was very different from other distros in terms of its ease of use - getting up and running, and friendly user base. I owe Ubuntu for getting me really interested in the benefits Linux could offer and for a time I stuck with it. After a time, I tried Debian and it was like a breath of fresh air in terms of stability.

    Linux frustrates me at times... especially when I look at the fragmentation. In many respects, I agree with Bryan Lunduke and his Linux Sucks spiel. In reality, there are a ton of amazing fragments of Linux, and only a tiny handful of distros actually worth considering. These boil down to:

    • Workstation distros - the most up to date packages in the repos, great GUIs and stable - assuming that you want a bit more than a ‘just functional’ workstation. Personally, I love pleasing GUIs. That’s why I love MacOS.
    • Server distros - rock solid, easy to administer/maintain - who really cares what it looks like, you probably don’t spend all day logged in anyway

    Things to consider:

    • Developer/user base
    • Commercial support or backing
    • Use or avoidance of anything non FOSS
    • A possible crappy package manager
    • Number of packages out of the box, unless you want to compile everything

    For a workstation, Debian isn't my ideal choice. It's solid, but solidity in the Linux world often comes at the price of old, out-of-date packages. Debian is, however, great for servers. Sure, you can use testing or sid... but then it's not stable. Fedora is bleeding edge... but it's unstable and is mostly FOSS. Why they push that for servers is beyond me. Any distro I try, I really try to like, and I think I've experienced ridiculous out-of-box bugs with Fedora more than any other. That brings me to the likes of Arch and derivatives. Excellent for workstations if you want the best of Linux, but they can be problematic if you don't update every 5 seconds (and even if you do)... not good for servers. Both Debian (and derivatives) and Arch (and derivatives) have great package managers though - something which a ton of other distros still lack, quite sadly. Anyone would think we're still in the 90s.

    Looking at the current top 10 on distro watch over the last 12 months, I see the following and here are some very incomplete, very biased and probably very shallow thoughts on them:

    • Mint - Urgh. Thanks for cinnamon, I think.. wait.. I’ll have KDE. Well, thanks for Mate, actually no I’ll take XFCE. Also, I can’t use a distro called Mint. Why don’t people think about the names they call their projects? No one is going to take you seriously when you say your servers run this... or workstations.. or anything. What distro do you use? Werther’s Originals. Also, small dev base compared to the likes of Debian
    • Manjaro - I love Manjaro, but not for servers. For the most part they seem to be getting it right. Glad to see the project doing so well. Nice extras for noobs too, like Kernel chooser GUI. It would be fair to say they’re also a small dev base, but then most are when compared to Debian.
    • Debian - the mother/father/great uncle/cousin etc of Linux as we know it. Epic package manager, tons of packages, easy to admin and maintain, HUGE dev team and user base. Everyone makes deb binaries available for their projects. Solid solid solid. Auto config/setup on package installation. Not too up to date though, without moving branches, which sacrifices stability. Silly thing is, the volatile nature of sid (that you would need to get the newer packages) actually makes it less stable than Arch based distros. Easy to package for. Easy to host repos.
    • Ubuntu - we all know it... money makes things move. Does this mean there is no good FOSS without corporate backing? Of course not. But money makes development move. Ubuntu is a lot more stable than it used to be. They seem to have got their silly ideas (Unity!) out of their heads and are moving back in the right direction. I was surprised to see Ubuntu as a supported OS on HPs site for the Gen 8 microserver alongside Red Hat and yet not Debian. They’re pushing themselves very heavily as a commercial solution. A ton of Debian’s plus points and now that stability has improved, very little to complain about. Better LTS model, newer packages, commercial support if needed. PPAs, uses apt and debs. Seems to be providing the goods... took a legal risk and won... ZFS.
    • Solus - I’m not too familiar with it but I’ve read the package manager sucks and it’s MORE fragmentation. Budgie looks nice, but it’s more dilution of coding efforts in the community.
    • Antergos - call me shallow, but I hate the name. It's gross. ;-) It's Arch though, so it has pretty much all the plus/negative points that brings, with a GUI installer. I think I prefer Manjaro.
    • Elementary - What? NEXT!
    • Fedora - Red Hat's loyal group of butt monkey Guinea pigs. Yeah, you get cutting edge. You also get bugs, bugs, bugs. Limited to FOSS without additional repos, which can, and often do, break things. Yum is slow and sucks. DNF is better, but is still slow and inferior to apt. DNF and yum do have one of the nicest/tidiest outputs though, and I really wish apt looked like that.
    • OpenSUSE - very very very easy to administer, but YaST will make you lazy. Things do tend to work very well out of the box with SUSE, though. They're also totally in bed with BTRFS and seem to be anti-ZFS. I haven't used it a lot in recent years, but it's good to have both the Tumbleweed and Leap options. Another thing I noticed was that YaST in openSUSE is not the same as in SLE - it seems to be missing a lot of modules. SLE is much faster than openSUSE, so openSUSE is more like Fedora than CentOS. It's still stable though - the best RPM distro IMO. Thanks for the Open Build Service.
    • TrueOS - not Linux.

    There’s only 2 distros on that list that I would want to use for a server. That’s Debian and Ubuntu. SUSE is ok if you don’t need ZFS and don’t mind compiling a lot. More devs offer apt packages and even Red Hat than SUSE. Not a massive issue, I know. But it’s all the little things.

    Other mentions:

    • CentOS - Red Hat, but more out of date than Debian a few years in. SLOOOW release cycle. And it's now owned by Red Hat... not really Linux. It's a business.
    • Oracle - just.. no.. really. Just no. Just release ZFS then disappear. And take Java with you too.
    • Source Based (Gentoo etc) - love using them, great to tinker with but so impractical and time wasting unless you make custom digital juke boxes or kiosks.
    • Unix - limited hardware support
    • All the other distros where they’ve chucked their choice of packages together with a nice theme and a crappy name - why?!

    So, IMHO, Ubuntu and Debian are the only choices for a decent solid server OS. OMV 5 must have ZFS. Ubuntu makes this easy and would eradicate a lot of the issues we see installing / updating Debian and keeping ZFS ticking. It may well be the better choice.

    It does work with the latest kernel. It just doesn’t work with the latest kernel/version of ZFS combo in Debian’s own repos - this will change - soon.

    In the meantime you can enable OMV extras testing to get the latest ZFS version, which works just fine with kernel 4.16. It’ll take you 10 minutes tops.

    You should ideally export a pool before moving it to a new OS, but it’s not essential if you absolutely know it’s not in use elsewhere. It’s just not best practice to skip that step. You can use the -f flag to force import. You don’t need to remember the name of the pool if you’re importing. You can easily find out... Just use:

    zpool import

    to see what is available to import.

    Once you have the name of it, you can:

    zpool import -f poolname

    Or there is an option in the GUI to force. You can do all of this via the GUI, in fact. There is a switch to import all available pools. You'll want that if you don't know the name. You'll also most likely need to force the import, as ZFS will think the pool is still in use on another system if you didn't export it.

    I would highly recommend finding out why your ‘system’ became read only. I assume you mean your pool was read only? I.e. a permissions problem. You could have fixed this in 20 seconds with chmod and chown. If it is a permissions error, nothing will change in OMV 4... your pool won’t automatically become owned by your new systems user account. You need to issue commands to fix/alter it.

    Get up and running and get your pool imported first then post again if it’s still read only and we’ll try and sort it.
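    If it does turn out to be a permissions problem once the pool is imported, the fix is along these lines (the user, group and mountpoint below are hypothetical - adjust to your system):

```shell
# Hypothetical mountpoint and account names - substitute your own.
# Give your user ownership of the dataset, then set sane permissions:
# u+rwX,g+rX = owner read/write, group read; +X adds execute on directories only.
chown -R youruser:users /tank/data
chmod -R u+rwX,g+rX /tank/data
```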

    It’s usually seconds. I’ve upgraded all of mine.

    I think the concern is that if you upgrade: will you always be able to get those drives in a system of at least that version? Probably. There were no additional upgrade prompts on 0.7.9 for me, so it’s the previous version that prompted. I think we’re at the point now where we can get to 4.16 and 0.7.9 easily.

    Do it :-)

    I used to have the opinion that Ubuntu was unstable, but it really has improved in recent years. I have a box here running 16.04, and it's solid. A simple apt-get and zfs is up and running too ;) I wouldn't be [entirely] opposed to a change to Ubuntu, but I still think Debian is better :)

    So the ZFS plugin was updated today and everything works fine with a freshly installed and updated OMV 4.1.7 with 4.16.0.bpo.1. Massive thanks to subzero79 and ryecoaaron for updating the plugin and helping a novice like me :thumbsup:

    Nice. :D

    I have just updated the plugin here too, and can confirm that I now see ZFS pools in the dropdown. Go to the ZFS tab first and accept the config changes, then to the shared folders - and they are there! Nice work @ryecoaaron and @subzero79 <3

    @Blabla Your fix is here :)

    1. Enable OMV-Extras Testing in OMV Extras tab
    2. apt update
    3. apt upgrade - during upgrade select 'N' to any config file changes (to keep yours)
    4. Visit ZFS tab and accept changes (yellow banner)

    You may need to export pools, then import them again here. You probably won't get a 'save config changes' prompt unless you need to (I am guessing).

    5. Disable OMV-Extras testing if desired.

    6. Go enjoy your pools in the dropdown :)

    I'm sure this kind of thing doesn't happen often - it's the first time it's happened that I can remember in however many years of using Debian. There must be a Godzilla bug in the 4.14/5 kernels that they've discovered. The hold buttons will still be of great use once everyone is upgraded and settled on 4.16; then it's time to press hold and leave it alone unless you have good reason.

    I don't see a problem with having the 4.9 stable kernel by default, but I think an option in the GUI to move to backports should be kept - stable by default, and move to backports if need be.

    @votdev we've had a response from Tapatalk about 'get thread' error:

    "when the App tries to get topic from the forum, the forum's server throws back a 500 internal error.
    I'll send a note to the forum admin to see if anything has been done recently. I'll ask if they could send us any error log there may be associated with Tapatalk usage. PHP, web service and the application log. See if you could request the same. Thanks."

    Is it possible to get these logs for them?