Posts by BernH

    Totally agreed. But all that taken into consideration, upgrading my current setup from connecting my Mac to the OMV's NVMe storage over 2.5GbE to 10GbE will improve the speed considerably. Doing so with a 40Gbps Thunderbolt connection will enhance it even more, to a potential 4x, real world speed to be seen…and without the cost of a new switch…and the NAS sits right beside my Mac…

    Not necessarily.


    As an example, we have a large half-petabyte SAN built from 4 spinning-disk chassis totaling 64 Exos HDDs, with a 24-SSD cache tier on top for increased bandwidth. It has Linux, Windows, and Mac clients connected.


    One of those clients is an M1 Ultra Mac Pro. It has a 100Gbps connection to our SAN via an ATTO FastFrame card. We were not seeing great speeds from the ATTO card (around 1500MBps, or 12,000Mbps), so as a test we swapped to an ATTO ThunderLink Thunderbolt-connected box with a 25Gbps connection and are getting 1200MBps, or 9600Mbps. That is 1/4 of the maximum connection bandwidth, but 3/4 of the speed reached on the 100Gbps connection.


    Linux clients with the same 100Gbps connection can get 2 to 3 times that speed, and Windows clients on the same 100Gbps connection reach about 3/4 of it.


    These are all iSCSI connections, to minimize overhead. None of them can saturate the 100Gbps links. Linux might be able to saturate the 25Gbps connection, but nothing else can; at best it's about 40% of the connection bandwidth on the other OSes.


    iperf tests show much higher "normal" numbers through the network switches, and the math says the storage chassis should be able to keep up with the bandwidth, but in this case the bottleneck is in the way the OSes handle the connection.
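
    For reference, a raw network test like the following takes the storage out of the equation and measures only what the NICs, switches, and OS network stacks can do (a sketch; the server address and stream count are examples to adjust for your setup):


    Code
    # On the server side: start iperf3 listening
    iperf3 -s

    # On the client side: run with 4 parallel streams
    # (replace 192.168.1.10 with your server's address)
    iperf3 -c 192.168.1.10 -P 4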


    There is more than just the bandwidth math to consider.

    I'll try to get my hands on such an add-on board; if I do I'll report back. As for hardware, machines with Thunderbolt connections are actually more prevalent than ones with 10GbE I think...most modern laptops are coming with it...

    You also have to keep in mind that just because there is a Thunderbolt connection, it doesn't mean a system can use all of that bandwidth. A 40Gbps Thunderbolt link, converted to Mbps to keep it comparable with storage numbers, is 40000Mbps as a theoretical maximum.


    If the workload is on one drive: a spinning disk will probably top out around 1200Mbps, a regular SSD around 4000Mbps, and an NVMe around 28000Mbps (all theoretical maximums). From those theoretical maximums you then have to subtract all hardware and protocol overheads, latencies, and bottlenecks.


    If your hardware and protocols can use the full bandwidth, then you have to start looking at some kind of RAID storage to distribute the workload across enough drives to saturate that connection, preferably using a hardware RAID controller to look after any checksumming for redundancy.


    The point is, it doesn't matter how fast your connection is if there is something in the mix that can't keep up. It will only be as fast as the slowest link in the chain.
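
    If you want to see what a given drive can actually sustain before blaming the connection, a quick local fio run gives a real number to compare against the link speed (a sketch; the filename is an example scratch file on the drive under test and will be created/overwritten):


    Code
    # Sequential read test: 1MiB blocks, 4GiB total, direct I/O to bypass the cache
    fio --name=seqread --rw=read --bs=1M --size=4G --direct=1 \
        --filename=/srv/dev-disk-by-uuid-xxxx/fio-testfile

    # Same scratch file, sequential write
    fio --name=seqwrite --rw=write --bs=1M --size=4G --direct=1 \
        --filename=/srv/dev-disk-by-uuid-xxxx/fio-testfile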

    PUID and PGID are the user and group IDs of the user you want the container to run as. Root is 0:0, the first normal user is 1000:100 (100 being the users group), with each additional user incrementing the 1000 by 1.
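
    If you are unsure of the IDs for a given user, you can check from an ssh session (myuser below is a placeholder for the actual user name):


    Code
    # Print the numeric uid, gid and group memberships for a user
    id myuser
    # Example output: uid=1000(myuser) gid=100(users) groups=100(users)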


    Personally, for admin folder access (which I agree can sometimes be very handy through a file browser), I use webmin, but only for the file manager. It can be used for server config on a normal Linux install, but it can also break OMV: there are things OMV takes control of, and if you change those configs outside of the OMV interface you can break it.


    With that in mind, I do stress that you need to be very careful if you do want to try webmin.

    It might be an issue with the Debian ARM image, and nothing to do with OMV. The only thing I can think of to eliminate that would be to do a fresh install of Debian, install docker-ce and compose, and then try to run the containers like that. If they work it's an OMV/OMV-extras issue; if they don't, it's a Debian ARM build issue. At least then you know who to chase.
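
    For the test install on plain Debian, something like Docker's convenience script is the quickest route (a sketch; review the script first if piping it straight to a shell concerns you):


    Code
    # Install docker-ce and the compose plugin on a fresh Debian system
    curl -fsSL https://get.docker.com | sh

    # Verify the install, then try bringing the containers up
    docker --version
    docker compose version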

    I don't know what else to suggest. Is this the same hardware you ran previous versions on, or is it something new? The ODROID HC4 NAS uses an ARM chip from what I see.


    I have no experience troubleshooting anything ARM-based, but I know there are 32-bit and 64-bit versions, requiring the correct OS version to be installed. If yours is 32-bit, you may be running into some of the issues that are starting to arise with the death of 32-bit software.
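
    You can confirm which one you are actually running from an ssh session:


    Code
    # Kernel architecture: aarch64 = 64-bit ARM, armv7l = 32-bit ARM
    uname -m

    # Architecture the installed Debian packages target (e.g. arm64 vs armhf)
    dpkg --print-architecture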

    I don't see anything in there that would raise a flag, but I have never had to diagnose a Docker network access problem, so I have never had to analyze such a config and I'm not sure what would be normal.


    I do notice you are using the network automatically created by a container and not a named network. While this should not change access to the container, it will keep other containers from addressing it by name, since they don't share a network and can't resolve each other's container names. Just an FYI in case you have containers that need that capability.
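
    For reference, a named network looks something like this (a sketch; mynet and the container names/images are examples):


    Code
    # Create a user-defined bridge network
    docker network create mynet

    # Containers on the same named network can resolve each other by name
    docker run -d --name app1 --network mynet nginx
    docker run --rm --network mynet alpine ping -c 3 app1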


    Do you by chance have the fail2ban plugin installed that is perhaps causing iptables blocks, or firewall rules enabled that could also be causing iptables blocks to the Docker network, or to non-host-network containers in general?
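
    A couple of quick checks from an ssh session can rule those out (a sketch; jail names and rule output will vary with your setup):


    Code
    # List active fail2ban jails and any banned addresses
    sudo fail2ban-client status

    # Look for REJECT/DROP rules that could affect the docker networks
    sudo iptables -L -n -v | grep -E 'REJECT|DROP'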

    My bad for not stating it, but I have done that as well. I have not tried to access the 172 address across the LAN, only from the host machine itself (via ssh).

    omv shows up fine in the browser, as did omv:32400 for plex. I've just been unable to access any container in a bridge network since I upgraded to bookworm/OMV7 the other day. I've been using OMV since late in version 5 with plex and qbittorrent. I'm by no means an expert, but I never ran into this issue on OMV5, or after upgrading to and using OMV6... but now I can't seem to get around this.

    I have been using OMV since V2 or V3, and have never had issues bringing containers forward, coming from the old docker plugin on through the portainer route and now compose. Ultimately under the hood it all still works the same.


    Are you sure the container is running? Check the logs for the container. Perhaps it has not actually started up completely due to an error. Not being able to ping it from the host makes me think you have an error and the container is not fully initialized.
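
    From an ssh session, something like this will show the container state and any startup errors (qbittorrent is an example container name):


    Code
    # Show all containers, including ones that exited on error
    docker ps -a

    # Follow the last 50 log lines of a specific container
    docker logs --tail 50 -f qbittorrent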

    That looks like exactly what I need. Here you are also able to clone the entire device, which isn't possible inside the OMV UI because you can only choose folders from the source drive. By 'scheduled job', I believe you're referring to the feature in OMV - I will check that out.


    This setup might just be exactly what I need.


    Thanks!

    Yes, I mean the scheduled jobs in OMV. If you just uncheck the enable checkbox it will not run on a schedule, but you can still run it manually.


    Or you can just use an ssh connection to run it at the CLI.

    The 172 IP address is a Docker network address; it is not accessible from your LAN and is only used for internal Docker access. Instead of trying to access the container with omv:8080, replace the omv with the LAN IP address of OMV. If omv is not being resolved to an IP address on your LAN, you have to use the IP address in a browser, not omv. The exposed port 8080 will direct access to the container, assuming you don't have access blocked by something else.
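
    You can confirm the published ports and test against the LAN address from the host first (a sketch; 192.168.1.50 is an example to replace with your OMV LAN IP):


    Code
    # Show which host ports are published for each container
    docker ps --format '{{.Names}}\t{{.Ports}}'

    # Test the exposed port against the host's LAN address
    curl -I http://192.168.1.50:8080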


    Also, for clarification, the reason you could access plex is because the network mode in your compose file is host. That makes the container use the host IP address, and not a Docker IP address that relies on an exposed port for access.


    Not trying to be nasty, but you need to learn how that stuff works/behaves if you want to be successful with docker.

    First things first: you need to allow r/w permissions for the OMV user(s) on the shared folder you are using for samba. This is in the shared folders section of OMV.


    You also need to log in to the samba server as that user from the other systems.
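
    If you want to verify the credentials before touching the other systems, smbclient can test the login directly (a sketch; omv, myshare and myuser are example names):


    Code
    # List the shares visible to a given user
    smbclient -L //omv -U myuser

    # Connect to a share interactively as that user
    smbclient //omv/myshare -U myuser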


    Additionally, if required, you can also force the samba server to create all files as a specific user by adding something like the following to your share's extra options, adjusting the masks, modes, and user/group as required (the listing below is for my nextcloud directories, so samba access creates files with nextcloud-friendly ownership and permissions):


    Code
    create mask = 6775
    directory mask = 6775
    force create mode = 6775
    force directory mode = 6775
    force user = www-data
    force group = www-data

    As you have discovered, the whole shared folder thing is just used as a "friendly" way to handle things in the OMV UI. If you want to go below that and replicate actual full disk structures, you need to do this task via CLI, SSH, or a scheduled job (either a direct command or running a script).


    As an example, I have the following rsync command set as a scheduled job to clone one drive to another as a backup. It is set to exclude the recycle bin that the samba plugin creates, but it will also delete any files on the destination that don't exist on the source. However, I do not have it enabled, so it will not run automatically; I have to manually find it and run it in the scheduled jobs list.


    Code
    # Clone source to destination, deleting anything on the destination
    # that no longer exists on the source; the trailing slash on the
    # source path means "copy the contents, not the directory itself"
    rsync -ahv --del --delete-excluded --exclude="/mirror/.recycle" /srv/dev-disk-by-uuid-0e13d684-5e3e-41e0-a4fa-d1be722413dd/ /srv/dev-disk-by-uuid-4a9c2125-12d1-4b4a-8275-86284cbcb690

    With Soma's help identifying the board as CWWK and looking at the board pictures, does this look like your unit?


    CWWK 12th generation N series 8-core new member affordable version N305/N200/N100 fanless low power consumption micro mini industrial control host soft routing
    Product Information Product Features 1. Brand new Intel 12th generation N series full small core low power consumption processor 2. 1*SO-DIMM DDR5 memory…
    cwwk.net


    If it is, the rudimentary specs say:


    8. The M.2 x4 interface supports the expansion of 2/3/4 M.2 NVMe x1 adapter boards


    Which makes me think the Lexar NM790 NVMes, being x4 drives, may be able to operate at x2 but not x1, and with the adapter card being in an x4 slot, the 4 available PCIe lanes are given to the first 2 NVMes and the other 2 are starved.

    Additional note


    If you can run dmidecode (piped to either less or more for ease of seeing the output) via ssh or a console, it may list the motherboard info towards the top. This may assist in finding the info online.
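
    For example (it needs root, and the baseboard filter narrows the output to the motherboard section):


    Code
    # Show motherboard manufacturer/model information
    sudo dmidecode -t baseboard | less

    # Or dump everything and page through it
    sudo dmidecode | less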

    Those pictures do help in understanding the connections, for sure. As I see it, it looks like the card is only using the M.2 adapter/header you have noted as slot A on the left.


    This once again makes me think it may be a situation where populating the M.2 slot in the middle of the board is disabling 2 of the slots on the card. By this I mean the system may be a "one or the other" kind of configuration: either 2 NVMes on the motherboard or 4 on the card, but not both. They may still be seen in the BIOS because a chip on the card is reporting them, but not have the required PCIe lanes to operate correctly.


    Trying a different OS (another Linux, or Windows) may help determine if the card is defective, but a quicker/simpler test may be to remove the NVMe from the middle slot on the motherboard and place it on the card. That could answer the PCIe lane starvation "one or the other" question I just mentioned, and determine whether the card is functioning. Linux should still boot from it, as it doesn't really care what slot a boot drive is in, as long as there is a boot loader and OS on it.


    Once again, without knowing the PCIe lane distribution when using the add-on card, this is just a guess.

    All the questions you mentioned I also asked myself. I just wonder why it is also not working when I install just one NVMe SSD on the board in M.2 slot 3 or slot 4?

    Your diagram does help in understanding the situation a bit, as it is difficult to picture based on the written information above. Unfortunately, it doesn't provide an answer. The fact that you seem to be able to get 2 slots on the card to work, but not the other 2, makes me think you either have PCIe lane starvation as I explained above, or a defective card. If it were a driver issue I wouldn't expect the card to operate at all, but it is still worth looking into, as there may be something unique to that card that requires a "special" driver version.


    I think you really need that specification information/motherboard and card manual to try to understand what is happening.

    Hi BernH,


    Thank you for your feedback. The PCIe x4 "card" is part of the computer; this means it was already installed when I bought the PC. How do I find out which driver I may need? How do I see how many PCIe lanes are occupied? (But if there are too few PCIe lanes, why can I see all four NVMe SSDs in the AMI BIOS?)

    Regards,
    Mic.


    Since I have no detailed information available on your device, I can't give specifics. You may have to look at the specifications for the hardware and/or contact the manufacturer if you can't find the information, as there is no other way that I am aware of to tell what the motherboard architecture is doing.


    Basically, the concept is this:


    A CPU offers up a certain number of PCIe lanes for the motherboard, storage, and expansion cards to use, but once assigned they are dedicated to each device.

    As an example, if the CPU offers 16 PCIe lanes, you could use them all for a GPU operating at full speed, or give 8 to a GPU and 8 to 2 NVMe drives, or not use a GPU and allow them all to be used for storage or some other PCIe card; but once they are committed to a device they can't be used for something else.


    With that in mind, you can look up the info on the CPU to determine the number of PCIe lanes it offers, and then look in the manual for the system/motherboard and/or expansion cards for any information regarding PCIe lane usage by devices. Sometimes you will find information there about the required PCIe lane usage for a device and its impact on other devices.


    It is not uncommon when looking at this information to see references to storage stating things like: if you use an NVMe drive in a certain slot, it will disable some other slot or combination of slots. This is because the NVMe requires a certain number of PCIe lanes, and if there is no PCIe controller chip providing more lanes than the CPU offers, it will take what it needs from the CPU, thereby reducing what is available to other devices.


    In your case, as an example, based on the Intel N100 CPU specs, there are 9 PCIe lanes provided by the CPU. Assuming there is no additional PCIe controller chip to look after other things, and you have 2 NVMe drives on the motherboard with each drive requiring 4 lanes, you now have 8 used and 1 available (usually motherboard NVMe slots run at x4, but once again, without motherboard information this is just based on common usage). If the add-on card requires x4 as you mentioned above, it will need 4 of those 9 PCIe lanes, but 8 of them are in use by the onboard NVMe drives, leaving only one lane available to the card, which may be enough for the card to report things in the BIOS but not enough for it to operate properly.
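
    If you can get to a shell, lspci can show the negotiated link width per device, which would help confirm or rule out lane starvation (run it with the drives installed in the combinations you are testing):


    Code
    # For each PCIe device, compare the maximum (LnkCap) and the
    # currently negotiated (LnkSta) link speed and width
    sudo lspci -vv | grep -E 'LnkCap:|LnkSta:'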


    With all of that in mind, as I said, you first need to look for the information on the hardware to see if you are exceeding the available PCIe lanes as explained above.


    If you are not, then you will have to determine the Linux compatibility and/or driver requirements. Most standard Linux-compatible devices have drivers built into the kernel, but sometimes additional drivers are required. Once again, this is not something I can answer without information on the hardware chipsets and some googling, or possibly even querying the manufacturer if general searching does not reveal the information.
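
    A quick way to see whether the kernel has bound a driver to the card at all (a minimal check; the device descriptions will vary):


    Code
    # Show PCI devices with vendor/device IDs and the kernel
    # driver (if any) currently bound to each one
    sudo lspci -nnk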

    The first thing I would check is the Linux compatibility of that card, if the BIOS sees it right but Debian does not. Perhaps there is a driver that you need to install.


    You should also check to see if there are any issues with the PCIe lanes available on that slot (i.e. does the card need 8, but there are only 4 on that slot?).