Posts by no_Legend

    Hi, I just figured it out.

    The cache was not attached to the storage drives.

    I didn't create the cache device and the backing HDDs together in one call.

    So I needed to attach the backing devices to the cache set manually.

    After that it looks like the cache is working.

    I will run some speed tests again.
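    For anyone running into the same thing, the manual attach can be sketched roughly like this (device names, bcache numbers, and the UUID are placeholders, not my actual setup):

```shell
# Create the cache device on the SSD (only needed once)
make-bcache -C /dev/sdb

# Find the UUID of the cache set
bcache-super-show /dev/sdb | grep cset.uuid

# Attach each backing device to the cache set
# (bcache0 is the bcache device of one of the HDDs)
echo <cset-uuid> > /sys/block/bcache0/bcache/attach
```

    Repeat the last step for each of the backing devices (bcache1, bcache2, ...).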

    Cheers Robert


    Is anybody in here working with bcache?

    I used this guide here (sorry, it's only in German).

    I want to check whether what I did was successful.

    But when I check the state of one of the 4 drives that should be attached to the cache, I can only find the state "detached".

    Does anybody know how to check this, or what I did wrong?
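    In case it helps, the attach state of a backing device can be read from sysfs (bcache0 and /dev/sdc are example names here):

```shell
# State of the backing device behind bcache0:
# "clean"/"dirty" means attached, "no cache" means detached
cat /sys/block/bcache0/bcache/state

# Inspect the bcache superblock directly on the backing disk
bcache-super-show /dev/sdc
```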

    Cheers Robert

    HannesJo, the write hole is not a big problem for me.

    In the past I used mdadm without any bigger problems. Also, I don't have a UPS running.

    But in Btrfs, RAID5 is also marked as unstable. That is the reason why I don't want to use it.

    As far as I have read on the internet, LVM RAID is not a well-performing solution.

    There is also some information saying that LVM RAID uses the same MD (mdadm) code under the hood.

    I found some benchmarks from a German guy, but the main problem there is that you don't know when the tests were done.

    So with some further code changes, it could be that the performance is not a problem anymore.

    Who knows?

    I also found some setups where people first create the mdadm RAID and then put LVM on top of it.

    But there an SSD RAID is also used for the caching.

    At the moment I think I will not use the SSD cache option. I will start with an mdadm RAID5.
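    The mdadm RAID5 plan, as a rough command sketch (device and array names are examples for illustration, not taken from my machine):

```shell
# Create a RAID5 array from the four disks; one disk's worth of
# capacity goes to parity, so 4 x 4 TB gives roughly 12 TB usable
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Watch the initial sync progress
cat /proc/mdstat

# Persist the array definition (Debian/OMV path)
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```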

    But any suggestions for the filesystem? Ext4 or anything else?

    Cheers Robert

    Hi everyone,

    I got 4 new HDDs (4 TB WD Red Plus) and want to use them in a RAID5 setup.

    Additionally, there is a 250 GB SSD in the system which I don't use for anything, so my idea was to use this SSD as a cache for the RAID5.

    Before, I used mdadm for the RAID. But if I read it right on the internet, mdadm does not support this feature.

    So far I have found out that LVM supports an SSD cache via dm-cache.

    Has anybody already used this setup in production and can give me some advice?

    Btrfs is not an option because there is no stable support for RAID5.
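    For reference, the dm-cache setup I have in mind would look roughly like this via lvmcache (all device, VG, and LV names below are made up; the cache pool size is just an example):

```shell
# PVs: the RAID5 array plus the SSD, grouped into one VG
pvcreate /dev/md0 /dev/sdb
vgcreate vg_data /dev/md0 /dev/sdb

# Data LV placed only on the RAID5 PV
lvcreate -n lv_data -l 100%PVS vg_data /dev/md0

# Cache pool on the SSD, then attach it to the data LV (dm-cache)
lvcreate --type cache-pool -n lv_cache -L 200G vg_data /dev/sdb
lvconvert --type cache --cachepool vg_data/lv_cache vg_data/lv_data
```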

    Cheers Robert

    I'm not able to install OMV-Extras for 6.0 alpha.

    After installation, clicking on omv-extras in the web GUI always brings up an error message: "Software Failure".

    Any advice on this?

    Edit: Needed to "reset the UI to defaults". Now it is working.

    No, the kernel names the devices based on a ruleset. Because of this behaviour, predictable device names HAVE to be used for filesystems (which is what OMV does), because you can never be sure that /dev/sda1 will not become /dev/sdb1 when another disk is detected.
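    That is why filesystems are usually mounted by UUID instead of by kernel device name. A minimal example (UUID and mount point are placeholders):

```shell
# Look up the filesystem UUID, which is stable across reboots
# and disk detection order
blkid /dev/sda1

# Example /etc/fstab entry using the UUID instead of /dev/sda1:
# UUID=0a1b2c3d-...  /srv/data  ext4  defaults  0  2
```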

    Hi, thanks a lot for the explanation

    Okay, got it, thank you.

    Cheers Robert

    All true.

    I just wanted to share a success story. In my case OMV 6 works just as stably as OMV 5.

    I'm just going to set up my system completely from scratch.

    So now I'm not sure whether I should give OMV 6.0 a try.

    But how will it be when there are updates from OMV 6.0 alpha to beta and on to the final release?
    Will an upgrade be possible, or could it be that I need to install the whole system from scratch again?

    Edit: I quickly installed it in a VM and found a few problems:

    1. No login is possible after typing in the password and hitting Enter; Enter just makes the password readable. You need to use the mouse to click the login button. Tested with Firefox.

    2. Updates cannot be installed via the web GUI, so the package openmediavault 6.0-16 needs to be installed by hand.

    3. There are updates shown which are not visible on the console. Reloading and searching for updates in the web GUI does not clear the update list.

    Is there an official way to report all these errors?

    Cheers Robert

    Hi everyone,

    can anybody explain to me how the naming is done for the HDDs/SSDs?

    I got some WD drives and some Toshiba drives.

    My OS SSD (WD) is connected to SATA port 1, and if only the WD is connected, it gets /dev/sda.

    But if a Toshiba is also connected, the WD gets /dev/sdb and the Toshiba /dev/sda.

    I was thinking that this naming is done by the SATA port numbering and not according to the device manufacturer.
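    One way to sidestep the sda/sdb shuffling entirely is to address disks through the stable symlinks udev creates:

```shell
# Stable names based on model and serial number,
# independent of detection order
ls -l /dev/disk/by-id/

# Stable names based on the physical SATA/PCI path
# (this reflects the port numbering)
ls -l /dev/disk/by-path/
```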

    Cheers Robert

    Btw, I love your signature: "OMV 3.0.XX Erasmus always up to date."... *LOL*

    Oh, you are right, you're already running OMV 5.x.

    But you can't see your own signature under my posts.

    Need to change it quickly.

    Back to topic.

    Thanks for the instruction.

    I will try it this way.

    Cheers Robert

    Hi everyone,

    in my build there is an old RAID5 made with mdadm.

    Now I want to replace these HDDs, which have been running for 12 years without any problem, with some new drives.

    I already made a backup of all data on an external drive.

    But here is what I'm thinking about: do I need to remove the RAID first in the web GUI of my OMV before shutting down the system and replacing all the drives?

    Or what is the best procedure to do this?
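    For the record, the teardown I would expect looks roughly like this (a sketch only; the mount point, array, and device names are examples, and a verified backup is assumed):

```shell
# Unmount the filesystem and stop the array
umount /srv/data
mdadm --stop /dev/md0

# Wipe the RAID metadata from each old disk so a stale
# array is not detected later
mdadm --zero-superblock /dev/sda /dev/sdb /dev/sdc /dev/sdd
```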

    Cheers Robert

    So I just checked the cable.

    And looks like the cable was the problem.

    I shook it a little bit and the errors disappeared.

    So I decided to replace the cable right away.

    Cheers Robert


    I get a lot of rx errors, but I don't know where they are coming from.

    In my home server (Dell T20) I use a bond of two NICs with link aggregation.

    It looks like the internal NIC is causing the problem.

    And the count is getting higher and higher.

    Does anybody have an idea where the problem is coming from?

    For further details of the system see the attached report.
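    In case someone wants to dig in, this is how I've been narrowing down the counters (interface names are examples; yours will differ):

```shell
# Per-interface rx/tx error counters
ip -s link show eno1

# Detailed NIC statistics (CRC errors, missed frames, ...)
ethtool -S eno1 | grep -i err

# Check link negotiation of both bond members
ethtool eno1
ethtool enp2s0
```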

    Cheers Robert