Posts by godfuture

    The 9300-16i runs EXTREMELY hot! It is technically two packages (two 8-port controllers on one card). I'd be wary of it for smaller builds. For a time I had it replaced with a 9305-16i, which ran about 15 °C cooler.

    I could still cancel my order. I got lucky thanks to the Chinese holidays. The cheapest 9305-16i incl. cables was 190€, but I have found a 9500-16i for 243€. Its lower 24/7 power consumption will pay off over the years, so I think it is the better option, and I end up with the better product, too.
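    To sanity-check the "pays off over years" claim, here is a back-of-the-envelope sketch. All numbers are assumptions: roughly 7 W lower draw for the 9500-16i, 0.35 €/kWh, and the two quoted prices.

```shell
# Rough payback estimate. watts_saved, eur_per_kwh and the prices are
# assumptions, not measured values -- plug in your own numbers.
payback=$(awk 'BEGIN {
  watts_saved = 7; eur_per_kwh = 0.35; price_diff = 243 - 190
  kwh_year = watts_saved * 24 * 365 / 1000   # ~61.3 kWh per year
  eur_year = kwh_year * eur_per_kwh          # ~21.5 EUR per year
  printf "payback in %.1f years", price_diff / eur_year
}')
echo "$payback"
```

    With those assumptions the extra ~53€ is recovered in about two and a half years of 24/7 operation.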

    Many thanks for all the support. I finally figured out what had been stressing me ever since I started self-hosting. 8)

    The board I am using only has one PCIe x16 slot; all the others are just x1. There is a long story behind it, but to keep it short, two important points led me here: price and transcoding.

    I wanted an Intel CPU because their integrated GPU is good at transcoding. But I could not find a cheap board with multiple x16 or x8 slots.

    That's why I need a card that bundles a lot of ports into a single slot.

    Alright, I will try to cancel my order and look out for a 9305-16i, even if it will be much more expensive.

    Thanks for all your help. Really looking forward to running my server without these ATA kernel timeouts.

    This is the post I should have received many years ago.

    My setup is quite special. I have two SSDs in RAID1 for the system (OMV is normally installed on a single disk), 3 disks in RAID5 for Nextcloud and backups (I have written scripts that create a backup before upgrades), 5 disks in RAID5 for my media, and 2 disks in RAID0 for everything that needs speed. So no, not all disks are in one array!

    The LSI card should be the right one. How can I tell whether it is a fake?

    Many thanks!

    Edit: channels, ports, IT mode, full duplex, port multipliers... this storage topic isn't easy at all!

    Maybe starting a new thread asking about the DeLOCK 8938 card would grab people's attention rather than tagging onto an old thread. But the message I take from the old thread is simply to ditch the DeLOCK and get yourself a server-grade RAID card capable of running as a pure host bus adapter (HBA) in IT mode. One card will support 8 drives.

    I've no idea how your hardware is set up: whether you're using consumer or server-grade parts, what kind of case and cooling you're using, or how your 10 HDDs are combined.

    Among other things, top shows you the kind of iowait your system has. Things point to the DeLOCK 8938 card as a possible bottleneck and perhaps even a source of file errors.
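    If top isn't handy, the same figure can be read straight from the kernel. A minimal sketch for Linux: field 6 of the "cpu" line in /proc/stat is the cumulative jiffies spent waiting on I/O (a live rate needs two samples taken a second apart; this only shows the counter since boot).

```shell
# Read the aggregate "cpu" line from /proc/stat; field 6 is iowait.
read -r _ user nice system idle iowait _ < /proc/stat
total=$((user + nice + system + idle + iowait))
echo "iowait: $iowait of $total jiffies since boot"
```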

    I wasted so many days, weeks, maybe even months searching the internet and trying things out, and you reached the right conclusion in a few hours. Yes, you are right: this card messed up everything. I have already ordered an LSI Broadcom 9300-16i card.

    OK, you've got your answer as to why the DeLOCK card is bad for your setup.

    JMB575 is a port multiplier chip and not suitable for RAID applications. Just google JMB575 + RAID.

    I didn't know that a port multiplier is a problem with RAID. I have seen a lot of complaints about the kernel errors I experienced, because I was googling like hell. It seems you know a lot more than many admins out there.

    I am pretty late, but I have 100% the same situation.

    10 HDDs, Openmediavault, DeLOCK 89384, all SMART ok, "retrying FLUSH 0xea Emask 0x4" under high load.

    Edit: I contacted DeLOCK, but they don't see any issues with the card. So it would be interesting to know the exact name of the JMicron chip DeLOCK uses.
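    For anyone searching later, the symptom is easy to spot in the kernel log with a grep. A sketch (the sample log lines are inlined so it runs anywhere; on the server you would pipe dmesg or journalctl -k into the same grep instead):

```shell
# Count kernel-log lines that show the ATA flush/retry symptom.
# The printf lines below are illustrative sample output, not real logs.
matches=$(printf '%s\n' \
  'ata5.00: failed command: FLUSH CACHE EXT' \
  'ata5.00: retrying FLUSH 0xea Emask 0x4' \
  'usb 2-1: new high-speed USB device' |
  grep -cE 'failed command|retrying FLUSH')
echo "$matches matching lines"
```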

    I also have a lot of USB HDD issues. I bought a Seagate Deskdrive which sometimes worked on the USB 2.0 port, but very likely failed on USB 3.0.

    But lately I see issues on USB 2.0 as well. So I shucked the drive and bought an Inoteck USB 3.0 SATA adapter. Same problem there.

    My personal issue is that I don't really know how to monitor or debug all these situations. I just see EOF errors while copying files via my docker container "cloud commander", and I see I/O errors in dmesg.

    The summary: all these USB drives work perfectly on my Windows machine, but rarely to never on my Intel board running openmediavault/Debian with kernel 6.1 (and also 5.19).

    Edit: I have stopped plugging the external drives into the server to move data. Instead, I use a notebook wired to the local network and transfer the data via shared folders. I get better speed, no interruptions and no data corruption.

    I have changed /etc/cron.d/mdadm to "57 0 * 2,5,8,11 0". Let's see if that works :)
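    For reference, the five time fields in /etc/cron.d files are minute, hour, day-of-month, month and day-of-week, so that line means 00:57 on every Sunday in February, May, August and November. A sketch of the full entry (the command part follows Debian's stock mdadm file; the exact wrapper varies by version):

```
# /etc/cron.d/mdadm -- minute hour day-of-month month day-of-week user command
57 0 * 2,5,8,11 0 root [ -x /usr/share/mdadm/checkarray ] && /usr/share/mdadm/checkarray --cron --all --idle --quiet
```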

    Here is an example using "depends on"
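    (The example itself didn't survive the copy here; a minimal docker-compose sketch of depends_on with a health check might look like the following. The image names and the healthcheck command are purely illustrative.)

```yaml
# Illustrative compose file: "app" waits until "db" reports healthy.
services:
  db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
  app:
    image: nginx:stable
    depends_on:
      db:
        condition: service_healthy
```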

    And here is a link to some documentation.

    This made my day. It was an odyssey with ups and downs. In the beginning I tried exactly this, postponing the Docker startup until my crypt disks were ready, but without success: the containers came up no matter what depends or wait-for-path settings I used. Then I switched to individual systemd service units. That worked, but automatic restarts didn't, only starting on boot. Now, with your hint, everything works: everything starts automatically, with the correct bind mounts, in the correct order. Happy! :)

    Buuut, I am still a bit scared about why the systemd unit broke. Do you have an idea what might have happened?

    Just for the record: I was using fluentd as a docker container for other containers, which was an additional dependency to manage in Docker. Therefore I moved fluentd from Docker to the deb package. My thinking: every basic piece of functionality should come from the system, and logging is one of them. Since then I have fewer dependencies to manage in every container. Hope this helps others find a good architecture.

    And most users don't understand why they are using RAID... I don't use RAID on my personal NAS at home, but I do maintain hundreds of systems using RAID at work. What does that tell you?

    Good question. I would answer it the following way:

    Important data always needs a backup. RAID is not a backup, but a kind of risk management. It helps with two issues:

    • Backups are never fully up to date, so there is always a chance of losing files no matter what.
    • Restoring from backup costs time.

    If a disk dies and no RAID is in place, both of the problems above kick in.

    But there is a third factor: cost. The user decides whether the cost of data loss is bigger than the cost of a backup. For less important data, RAID can be a way to balance costs:

    • For less important data, RAID is cheaper than a full backup.

    So in short, RAID balances cost and time.

    My Docker setup has always been a bit more complicated, because I use LUKS-encrypted drives for some Docker containers. Unfortunately these LUKS devices do not get mounted fast enough during boot, and the containers start with empty bind mounts.

    My solution was to create systemd service files and add a dependency on the mount path. This worked until I upgraded to OMV6.
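    For context, a unit of roughly this shape expresses that dependency. The paths and container name are illustrative, not my actual files; RequiresMountsFor= is the systemd directive that makes the unit wait for the LUKS mount before starting the container.

```ini
# /etc/systemd/system/docker-xyz.service (sketch)
[Unit]
Description=Start container xyz after its encrypted data mount
Requires=docker.service
After=docker.service
RequiresMountsFor=/srv/crypt/xyz

[Service]
ExecStart=/usr/bin/docker start -a xyz
ExecStop=/usr/bin/docker stop xyz
Restart=on-failure

[Install]
WantedBy=multi-user.target
```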

    Now my systemd service files exist, but cannot be used anymore:

    Failed to start docker-xyz.service: Unit docker-xyz.service not found.

    I have tried "systemd-analyze verify", "systemctl daemon-reload" and so on, but the error above still happens.

    Does somebody know how to tackle this issue? What might be a starting point?


    I have multiple software RAIDs active. Normally this works really well with OMV, and the software RAID web UI behaves as expected. But I noticed that every time a check is running (I believe also during a rebuild), the web UI stops responding and I get a "gateway error".

    Are there others with this issue? Or is it down to the devices I use or my general setup?

    Thankful for every hint in the right direction.
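    One knob worth knowing in this situation: during a check or rebuild, md resyncs at up to dev.raid.speed_limit_max KB/s per device, which can saturate the disks enough to starve everything else, including the web UI. Capping it is a common mitigation; the value below is illustrative, and progress is visible in /proc/mdstat.

```
# /etc/sysctl.d/90-md-resync.conf (sketch; value in KB/s per device)
dev.raid.speed_limit_max = 50000
```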