Posts by pudding

    If you're running 1 Gb/s Ethernet, you're never going to see the I/O that's theoretically possible with RAID 5, even with drives running at 3 Gb/s. The "network" (1 Gb/s) is a hard bottleneck.

    That's why I have a 10 Gb/s NIC installed and a direct connection to the 10 Gb/s NIC in my PC - alongside my normal 1 Gb/s connection to the NAS.

    So it should be possible to see the maximum RAID 5 performance - at least that's what I hoped. But it turned out I am stuck with 240 MB/s - even now that I have a 4-drive RAID 5 (it was 3 drives before) :/
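    To figure out whether the 10 Gb/s link itself or the array is the ceiling, a raw network test might help. This is only a sketch - it assumes iperf3 is installed on both ends and uses a made-up address for the NAS's 10Gb interface:

    # on the NAS
    iperf3 -s

    # on the PC, pointing at the NAS's 10Gb address (example address)
    iperf3 -c 10.10.10.2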

    _______________________________________________________________

    Your drive, which is a 3 TB SATA drive capable of 6 Gb/s, is running at 3 Gb/s.
    If I were to hazard a guess at the reason why, maybe you have an older motherboard?

    OK, 3 Gb/s would be great. And yes, it's an older mainboard.

    But I was confused, as UDMA 133 is 133 MB/s, whereas 3 Gb/s would be roughly 375 MB/s.

    Additionally, my RAID 5 (3 drives) does not show the performance I was expecting. It's about 260 MB/s - that is what I would expect from a UDMA 133 connection.

    As my drives can transfer (read) up to 150-160 MB/s (via SATA II), I would expect RAID 5 transfer rates of around 300 MB/s.
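    A rough way to check whether the single drives or the array are the limit would be a sequential read timing. Just a sketch - the device names are examples and need adjusting:

    sudo hdparm -t /dev/sdd     # one member drive, ~150-160 MB/s expected
    sudo hdparm -t /dev/md0     # the RAID 5 array (assuming it is md0)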

    Good evening,


    I am a bit confused by the output of smartctl and the messages log.

    sudo smartctl -a /dev/sdd states the following link speed for my hard drive:

    Quote

    ATA Version is: ATA8-ACS T13/1699-D revision 4

    SATA Version is: SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)

    while the messages log for the same drive says

    Quote


    ata4.01: ATA-8: ST3000VX000-1CU166, CV23, max UDMA/133

    What's correct? :huh:
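    As far as I understand it, the "max UDMA/133" part is just the ATA transfer mode the drive reports, while the SATA link speed is negotiated separately. The kernel also logs that negotiation, so it might be worth cross-checking (ata4 taken from the log line above):

    dmesg | grep -i 'ata4.*SATA link up'
    # should print something like: ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)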

    Try this:


    sudo rm /etc/systemd/network/99-default.link

    sudo netplan apply

    sudo omv-salt deploy run systemd-networkd

    Sorry, but that does not work for me.

    The first command gives me

    Quote

    rm: cannot remove '/etc/systemd/network/99-default.link': No such file or directory

    The second command's output:

    Quote

    /etc/netplan/20-openmediavault-enp2s0.yaml:5:20: Error in network definition: Invalid MAC address '', must be XX:XX:XX:XX:XX:XX

    macaddress:

    ^

    Any ideas?
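    For reference, my guess at what is going on: netplan chokes on the empty macaddress value in the OMV-generated file. I don't know the exact layout OMV writes, but a minimal definition for enp2s0 without that empty field might look roughly like this (purely an assumption, not the file OMV actually generates):

    network:
      version: 2
      ethernets:
        enp2s0:
          dhcp4: true

    Alternatively, the real MAC reported by ip link show enp2s0 might simply be filled into the empty macaddress: field.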

    Yesterday I switched from shutdown to hibernate, but it still didn't go to sleep.


    Obwohl "pm-hibernate" und "pm-suspend-hybrid" per SSH beide den Rechner schlafen legen. Jetzt bin ich verwirrt, warum funktionierts per CLI und per Autoshutdownscript nicht?

    The script doesn't seem to work entirely cleanly out of the box. In my case it didn't shut the PC down either, but it did deactivate the network card; even after a reboot that could only be reactivated with ip and dhclient. And during the boot process after the reboot I got roughly 30 lines of "start autoshutdown" "stop autoshutdown" "start autoshutdown" and so on.
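    For anyone who hits the same thing, bringing the card back up by hand went roughly like this (the device name is an example):

    sudo ip link set enp2s0 up      # example device name
    sudo dhclient enp2s0            # pull an address via DHCP again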

    Hello.

    I have OMV installed on a workstation and use it normally through my router on a 1 Gb/s LAN. Is it possible to put a PCIe 10Gb NIC in both the server and my PC and connect them directly, so that I can read/write at 10 Gb/s between the server and my PC? The reason would be to avoid buying an expensive 10Gb switch.


    (Meanwhile, the internet connection should stay as it is, on the 1 Gb/s cards / local network of the server and PC.)

    *push*

    Same issue here. Any solutions? The how-tos I found just explain how to bond NICs or how to have two unbonded NICs in the same network via a switch, etc.


    UPDATE

    OK, it turns out to be simple once you know where Windows blocks the second network.

    1. Assign a unique IP to your second OMV NIC via ip addr add xxx.xxx.xxx.2/NETMASK dev DEVICE. If the NIC is down, activate it via ip link set DEVICE up (example commands after this list).
    2. Assign a unique IP to your second Windows NIC that is +1 to the OMV address, so xxx.xxx.xxx.3. The netmask must be the same as the NETMASK for OMV. It is crucial to set the gateway of your second Windows NIC to the unique IP of your OMV NIC; otherwise Windows blocks the second network.
    3. Connect your SAMBA/CIFS share using the IP address of the second OMV NIC instead of the OMV machine name.
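    As a concrete example of steps 1 and 2 (the addresses and the device name here are made up - a 10.10.10.0/24 subnet used only for the direct link):

    # on OMV: bring up the second NIC and give it the direct-link address
    sudo ip link set enp3s0 up
    sudo ip addr add 10.10.10.2/24 dev enp3s0

    # on Windows: give the second NIC 10.10.10.3, netmask 255.255.255.0,
    # and set its gateway to 10.10.10.2 (the OMV NIC), then connect the
    # share as \\10.10.10.2\sharename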

    Done.