Posts by mrpg

    I have not moved cables between controllers.


    md126 is connected to my Gigabyte mainboard's SATA controller.

    md127 is connected to a PCIe plug-in controller:


    2:00.0 SATA controller: ASMedia Technology Inc. Device 0625 (rev 01) (prog-if 01 [AHCI 1.0])


    However, I have replaced the cables with brand-new ones, and the ATA device errors remain on the disks that have the new cables.


    I guess your suggestion would be best: move the md127 array to the onboard controller and test.
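

    Before I do, this is how I plan to double-check which PCI controller each md127 member actually hangs off (a sketch; the device names sda-sdd are examples):


    Code
    # follow each disk's sysfs path back to its PCI address, then match it against lspci
    for d in sda sdb sdc sdd; do
        printf '%s -> ' "$d"
        readlink -f "/sys/block/$d" | grep -oE '[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-9]' | tail -n 1
    done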


    Br

    Patric

    Dear Sir/Madam,


    My OMV NAS has worked really well since I upgraded my hardware.


    Now one of the RAID arrays is having some kind of issue: /dev/md127, which consists of four 3 TB Toshiba disks.


    Any file activity on the filesystem is slow and unresponsive, and if, for example, I copy a file from one folder on the md127 filesystem to another folder on md127, I get these kinds of errors in /var/log/messages:


    These errors appear on all four drives in md127.


    I have swapped out some of the SATA cables, but the same errors keep appearing on the disks whose data cables I swapped.
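

    For anyone looking at this, one way to separate cable problems from disk problems (a sketch, assuming smartmontools is installed; device names are examples):


    Code
    # SMART attribute 199 (UDMA_CRC_Error_Count) counts interface CRC errors;
    # if it keeps climbing even on brand-new cables, suspect the controller or port
    for d in /dev/sd[a-d]; do
        echo "== $d =="
        smartctl -A "$d" | grep -i crc
    done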


    I have another RAID array, md126, attached to the same SATA controller, with which I have no issues.


    Any help with this is very much appreciated!


    Br
    Patric

    Dear All,


    My OMV 4 is working wonderfully, except that I cannot upgrade the kernel without losing my NIC driver (Intel i219). The driver is included in the 5.05 build, so that is one reason I think an upgrade would be a good idea.
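

    For reference, this is how I check which driver the NIC ends up bound to after a kernel change (a sketch; exact output varies):


    Code
    lspci -k | grep -A 3 -i 'ethernet'   # "Kernel driver in use:" should list e1000e
    modinfo e1000e                       # confirm the module exists for the running kernel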


    The second reason I would like to upgrade is that Cockpit does not seem to play well with OMV 4 / Debian 9.10.


    I cannot create new VMs through the Cockpit interface, and the remote console does not work either.


    I logged an issue with Cockpit; they think it is a kernel bug that was never fixed: https://github.com/cockpit-project/cockpit/issues/12886


    Is anyone using Cockpit on v 5.05? Do the remote console and VM creation work through the Cockpit web interface for you?


    Br
    Patric

    Thanks for the info in this thread. I had the same issue with two disks: one of the disks was brand new, the other was from an ESXi installation.


    It created five partitions when I was creating a RAID 1 setup on the two disks.


    So I did a secure wipe of 40 GB on both disks, and then a quick wipe on both.


    Now when I create a new RAID array, it creates only one LUN without partitions.
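

    For anyone who finds this later, a rough shell equivalent of the wipes I did through the GUI (the device name is an example; these commands destroy data!):


    Code
    dd if=/dev/zero of=/dev/sdX bs=1M count=40960   # "secure" wipe of the first ~40 GB
    wipefs -a /dev/sdX                              # quick wipe: clear remaining signatures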


    Br
    Patric

    Yes

    OK, this works, but I struggle to make sense of it.


    I have all my files in /srv/dev-disk-by-label-ssdraid5/,


    but now I also have the same files in /srv/dev-disk-by-label-ssdraid5/zfs.


    Although /srv/dev-disk-by-label-ssdraid5/ and /srv/dev-disk-by-label-ssdraid5/zfs show the same size, they are not occupying double the space on disk.


    How does this bind-mount-on-export work?


    I cannot move files from /srv/dev-disk-by-label-ssdraid5/ to /srv/dev-disk-by-label-ssdraid5/zfs; it says they are the same files. It is like a symlink, but not quite?
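

    If I understand it correctly, a bind mount exposes the same filesystem at a second path, so both paths resolve to the same inodes and nothing is stored twice. A quick way to confirm (the file name is just an example):


    Code
    # identical device:inode pairs mean it is the same file, not a copy
    stat -c '%d:%i %n' /srv/dev-disk-by-label-ssdraid5/somefile \
                       /srv/dev-disk-by-label-ssdraid5/zfs/somefile
    findmnt | grep ssdraid5   # bind mounts show the source path in brackets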


    Br
    Patric

    Hi,


    Sure, if I place files in /export/zfs, I can see them on a remote system when I mount the export.


    But what do I do to make the mounted filesystem appear in my export?


    I must be doing something wrong, and I don't quite understand how to fix it.


    That is, I would like /srv/dev-disk-by-label-ssdraid5/ to be exported.
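

    For reference, this is the kind of fstab bind entry I would expect to make that happen (standard bind-mount syntax; the exact options OMV generates may differ):


    Code
    /srv/dev-disk-by-label-ssdraid5/  /export/zfs  none  bind  0 0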


    Br
    Patric


    Or wait, OK, I now understand what you mean: I need to move all my files to the zfs directory under /srv/dev-disk-by-label-ssdraid5/?


    Br
    Patric

    Hi,


    Just to check whether I have done something strange.


    Just now, from the OMV web GUI, I:


    1. Removed the NFS export
    2. Removed the shared folder
    3. Unmounted the filesystem
    4. Mounted the filesystem
    5. Created the shared folder
    6. Created the NFS export


    I can still see the files in /srv/dev-disk-by-label-ssdraid5,


    but not in /export/zfs.


    Rebooted.


    Same issue.


    And yes, I am always logged in as root (not even using sudo).


    EDIT:
    If I manually run:


    Code
    mount -o bind /srv/dev-disk-by-label-ssdraid5/ /export/zfs

    I can see the files under /export/zfs.
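

    Since the manual mount works, I will also check whether the generated fstab entry is present and being applied (a diagnostic sketch):


    Code
    grep 'export/zfs' /etc/fstab   # is the bind entry in fstab at all?
    mount -a                       # re-apply every entry in fstab
    findmnt /export/zfs            # confirm something is mounted there now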



    Br
    Patric

    Hi,


    I sat and configured my OMV yesterday for probably an hour; it's no longer crashing :), so we can ignore that issue!


    However, I added another five disks, set them up as RAID 5, created a shared folder, and created an NFS share.


    However, the bind mount does not seem to work.


    md126 is the new mdraid array; it's mounted at /srv/dev-disk-by-label-ssdraid5,


    and I can see my files:



    Code
    root@pgnas:~# ls /srv/dev-disk-by-label-ssdraid5/
    utils  stuff  trash


    In /etc/exports:




    Code
    root@pgnas:~# cat /etc/exports 
    # This configuration file is auto-generated.
    # WARNING: Do not edit this file, your changes will be lost.
    #
    # /etc/exports: the access control list for filesystems which may be exported
    #               to NFS clients.  See exports(5).
    /export/pgnas 192.168.0.0/24(fsid=1,rw,subtree_check,insecure)
    /export/zfs 192.168.0.0/24(fsid=2,rw,subtree_check,insecure)
    # NFSv4 - pseudo filesystem root
    /export 192.168.0.0/24(ro,fsid=0,root_squash,no_subtree_check,hide)

    My fstab:




    I can see that it should bind mount to /export/zfs,


    but if I do an ls /export/zfs, I cannot see any files.


    I have manually unmounted and mounted the /export/zfs directory, but it's still blank.
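

    To rule out the bind mount simply never being applied (as opposed to an empty source), this is what I check (a sketch):


    Code
    mountpoint /export/zfs   # reports whether a filesystem is mounted on the directory
    mount | grep /export     # list anything currently mounted under /export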


    Can anyone help with this?


    Br
    Patric

    Hi,


    So grateful for all the help so far! You were right, @ryecoaaron: my md127 RAID 5 works perfectly in my new NAS PC!


    But the web GUI / engined seems to crash or stop functioning. I can access the web GUI for a few minutes, and then it stops responding; I have to run:


    Code
    systemctl restart openmediavault-engined

    to get it going again; it seems to happen really frequently.


    Please let me know what info you want me to provide to help fix this.
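

    In the meantime, this is where I would pull logs from if that helps (a sketch; same unit name as the restart command above):


    Code
    journalctl -u openmediavault-engined --since "1 hour ago"
    tail -n 100 /var/log/syslog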


    Many thanks in advance.


    Best Regards
    Patric

    Hello,


    OK, I am back again :)


    I have finally had time to install Debian 9.9, then the Intel e1000e driver, and then OMV 4.


    When I first installed, I could access the web GUI. I saw that my md127 RAID was resyncing, so I left it to resync overnight.


    This morning, I get to the login page of the web GUI, I enter my username and password, and I am back at the "loading, please wait" screen.


    I can log in through SSH without problems, and I have zero load on the CPU and zero activity on my disks.


    How do I progress from here?
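

    For what it's worth, this is what I can check over SSH while the GUI hangs (a sketch; nginx serving the OMV web GUI is my assumption about the stack):


    Code
    cat /proc/mdstat                                # has the md127 resync finished?
    systemctl status openmediavault-engined nginx   # are the web GUI services still running?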


    Please help.


    Best Regards
    Patric