Posts by gtrdriver

    I'm using a Gen8 model with 4× 10 TB drives and an eSATA card connected to a SATA port multiplier case holding an additional 4× 8 TB drives.


    Works like a charm...


    I also installed KVM and Cockpit, and pfSense as the firewall for the whole network.


    Also very good!

    Hello everyone


    I have a Schneider APC UPS connected to the OMV box via USB.


    I installed the plugin for it, and it basically works.



    What is completely missing from the reported values, however, is Load and Voltage - those are the ones I'd like to visualize.


    Can anyone give me a tip on how to proceed here?
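    The OMV UPS plugin is based on NUT (Network UPS Tools), and which values show up depends entirely on which variables the UPS driver publishes. A first step could be to query the driver directly; a minimal sketch, assuming the UPS is registered in the plugin under the name "ups":

    Code
    # List every variable the NUT driver reports for this UPS.
    # "ups" is the identifier configured in the OMV UPS plugin.
    upsc ups@localhost

    # Query the two missing values directly.
    upsc ups@localhost ups.load
    upsc ups@localhost input.voltage

    If upsc prints the values, they can in principle be graphed; if the driver does not publish ups.load and input.voltage at all, that is a limitation of this UPS model/driver and no visualization can recover them.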

    Hello


    I have a working OMV installation here - everything is fine.


    I want to add an external case that requires eSATA port multiplier support, so I bought a Marvell eSATA PCIe card which is port multiplier capable (88SE9230).

    The card shows up in the BIOS and I can see the attached drives at boot, but OMV doesn't show the drives.


    I found some information about this issue with the 88SE9230 on the net, along with some workarounds, but nothing helps.


    The controller itself shows up in lspci:


    Code
    07:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9230 PCIe SATA 6Gb/s Controller (rev 11)


    But none of the drives attached to the controller show up in /dev/sd*.


    Searching on Google turns up some information about this issue, including some suggested solutions involving a newer kernel.


    So I installed omv-extras and activated the 5.9.0.0 kernel, but it's the same issue.


    Does anyone have an idea or a workaround for this issue?
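    A few diagnostic commands might narrow down whether the ahci driver actually claimed the controller. This is only a sketch; the intel_iommu change at the end is a workaround that has been reported for Marvell 88SE92xx controllers on Intel boards, not a guaranteed fix:

    Code
    # Check which kernel driver (if any) is bound to the controller.
    lspci -knn -s 07:00.0

    # Look for AHCI/ATA errors logged during boot.
    dmesg | grep -iE 'ahci|ata[0-9]'

    # Reported workaround for 88SE92xx DMA failures when the Intel
    # IOMMU is active: add intel_iommu=off to GRUB_CMDLINE_LINUX_DEFAULT
    # in /etc/default/grub, then apply and reboot to test:
    update-grub && reboot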

    Hi


    I have an HP MicroServer Gen8 here and want to use it as a simple NFS NAS system (only NFS is needed).


    For hard drives there are 4× 10 TB for a RAID6 (RAIDZ2).

    With ZFS the transfer speed (even from a locally connected USB3 HDD) is extremely slow - about 20-35 MB/s.

    Using RAID (mdraid) I get around 50-65 MB/s.


    I also tried FreeNAS (TrueNAS), which uses ZFS by default - on the same hardware I get around 70-85 MB/s write speed.

    But I don't like FreeBSD - I also had issues regarding permissions on NFS.


    Is there any way to speed up ZFS on OMV?
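    For what it's worth, NFS issues synchronous writes, and on a RAIDZ2 without a separate log (SLOG) device those are slow by design, so the pool and dataset properties are the first thing to check. A minimal sketch, assuming a pool named "tank" (a placeholder):

    Code
    # Inspect the properties that most often explain slow RAIDZ writes.
    zpool get ashift tank                            # 12 is right for 4K-sector disks
    zfs get sync,atime,compression,recordsize tank

    # Low-risk tweaks:
    zfs set atime=off tank        # skip a metadata update on every read
    zfs set compression=lz4 tank  # usually free, often faster on HDDs

    # High impact but unsafe: treat NFS sync writes as async. Only
    # acceptable if losing the last few seconds of writes on a power
    # failure is tolerable; a SLOG device is the safe alternative.
    zfs set sync=disabled tank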

    Hi


    I'm not new to Linux software RAID - so my understanding is that after creation the RAID has to do a first resync, which takes some time ...


    My new setup is a RAID6 with 4× 10 TB drives:

    Code
    md0 : active raid6 sdd[3] sdc[2] sdb[1] sda[0]
          19532609536 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
          [>....................]  resync =  0.2% (28918656/9766304768) finish=14959.6min speed=10848K/sec
          bitmap: 73/73 pages [292KB], 65536KB chunk

    That is estimated to take around 4 days ...


    I'm wondering, though: looking at NAS systems that also run Linux with mdraid, like Synology, TerraMaster and so on ...

    the first RAID init there seems to take only about 5 minutes ...


    Do they use another method, or do something else that handles the first init differently?
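    As far as I know, the appliance vendors don't use a different RAID technology; they either let the initial resync run in the background while the volume is already usable (mdraid allows that anyway) or they create the array without an initial sync. Both can be reproduced with plain mdadm; a sketch, not a recommendation:

    Code
    # The array is usable during the resync; it just runs throttled.
    # Raising the kernel's floor/ceiling (KB/s) speeds it up at the
    # cost of I/O latency:
    echo 100000 > /proc/sys/dev/raid/speed_limit_min
    echo 500000 > /proc/sys/dev/raid/speed_limit_max

    # Or skip the initial sync at creation time. Caveat for RAID6:
    # parity is only consistent for blocks written afterwards, so a
    # later "check" will report mismatches on untouched space:
    mdadm --create /dev/md0 --level=6 --raid-devices=4 --assume-clean /dev/sd[abcd]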


    Best regards

    Hello


    I have an OMV install on an RPi4 here (a fresh install made with the simple installer script from GitHub).
    Everything works fine.


    But when I try to activate notifications I get an error message:



    Code
    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-salt deploy run mdadm 2>&1' with exit code '1':
    /usr/lib/python3/dist-packages/salt/utils/path.py:265: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
      if not isinstance(exes, collections.Iterable):
    raspberrypi:
    ----------
              ID: remove_cron_daily_mdadm
        Function: file.absent
            Name: /etc/cron.daily/mdadm
          Result: True
         Comment: File /etc/cron.daily/mdadm is not present
         Started: 11:12:13.559648
        Duration: 0.895 ms
         Changes:
    ----------
              ID: configure_default_mdadm
        Function: file.managed
            Name: /etc/default/mdadm
          Result: True
         Comment: File /etc/default/mdadm is in the correct state
         Started: 11:12:13.560798
        Duration: 91.847 ms
         Changes:
    ----------
              ID: configure_mdadm_conf
        Function: file.managed
            Name: /etc/mdadm/mdadm.conf
          Result: True
         Comment: File /etc/mdadm/mdadm.conf is in the correct state
         Started: 11:12:13.652946
        Duration: 19.29 ms
         Changes:
    ----------
              ID: mdadm_save_config
        Function: cmd.run
            Name: mdadm --detail --scan >> /etc/mdadm/mdadm.conf
          Result: True
         Comment: Command "mdadm --detail --scan >> /etc/mdadm/mdadm.conf" run
         Started: 11:12:13.673847
        Duration: 17.118 ms
         Changes:
                  ----------
                  pid:
                      1712
                  retcode:
                      0
                  stderr:
                  stdout:
    ----------
              ID: mdadm_update_initramfs
        Function: cmd.run
            Name: update-initramfs -u
          Result: False
         Comment: Command "update-initramfs -u" run
         Started: 11:12:13.691489
        Duration: 12.163 ms
         Changes:
                  ----------
                  pid:
                      1714
                  retcode:
                      127
                  stderr:
                      /bin/sh: 1: update-initramfs: not found
                  stdout:

    Summary for raspberrypi
    ------------
    Succeeded: 4 (changed=2)
    Failed:    1
    ------------
    Total states run:     5
    Total run time: 141.313 ms



    Does anyone have an idea what's going wrong here?


    The base is a Raspbian (Debian Buster) install, also fresh.
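    The actual failure is the last state: update-initramfs returns exit code 127 ("not found"). Raspberry Pi OS boots without an initramfs and does not ship the initramfs-tools package by default, so installing it should let the deployment finish. A sketch, assuming the missing package is the only problem:

    Code
    # update-initramfs comes from the initramfs-tools package.
    sudo apt-get update
    sudo apt-get install --yes initramfs-tools

    # Re-run the failing Salt deployment to verify.
    sudo omv-salt deploy run mdadm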