Posts by moreje

    Please help,

    I regularly have issues with my RAID 6 configuration.

    The SATA links of several HDDs get reset, and the HDDs are ejected from the array.

    Here is an example of the dmesg output when it occurs:


    At this stage, the SMART info on the ejected disk is: Unknown.

    If I shut down the PC and re-plug the HDD, it is detected again, and if the array has had no events in the meantime... it goes back into the array...


    It seems this occurs most often with my TOSHIBA P300 4TB HDDs, which are 2 and 8 months old.

    No SMART errors are detected...
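
    For reference, here is how I query SMART once the disk is detected again (smartctl from the smartmontools package; /dev/sdX is a placeholder for the ejected drive):

    Code
    smartctl -a /dev/sdX         # full SMART report, attributes included
    smartctl -l error /dev/sdX   # just the drive's SMART error log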

    Everything in my machine is new (motherboard, CPU, RAM, etc.).


    What should I do?

    Thanks for your help.

    Hello,

    When upgrading or applying changes to OMV, I get the following warning and timeout...

    Any ideas on how to fix that?

    Thank you Krisbee,

    You say Ubuntu will need SMB1 (NT1) and that after that it will make an SMB3_11 connection, so I'm confused.

    I can't see any mention of the NT1 protocol in your link, and I'm surprised that the most recent LTS release still uses such a deprecated protocol. Do you know where I can find documentation on this use of NT1? Is it only used by the file manager?

    I did a complete reinstall of my Ubuntu, and OMV is still not showing up in the "Other Locations" area.

    I will check whether Ubuntu has other settings that could help, but I'm surprised that it does not work out of the box from a Linux client to a Linux server :(
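
    In the meantime, one way to see which dialect a live connection actually negotiated (assuming a reasonably recent Samba on the OMV side) is to run this on the server while a client is connected:

    Code
    sudo smbstatus   # recent Samba versions print a "Protocol Version"
                     # column per connection, e.g. SMB3_11 or NT1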

    My question was not whether to choose Samba or NFS ;)

    Of course I can connect using my OMV server's IP, but that is a workaround for a function that should just work, i.e. browsing network shares from the file manager.

    Maybe it is a network-related issue, since I've noticed that using the server's name does not work: smb://DTC-JEJE or smb://DTC-JEJE.local

    My Windows clients can see the server by its name, so it is known on the network... only Ubuntu can't see it.
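
    To separate name resolution from SMB itself, here is what I can test from the Ubuntu client (192.168.x.x stands for the server's real IP):

    Code
    smbclient -L //DTC-JEJE -N        # list shares by name, anonymous
    smbclient -L //192.168.x.x -N     # same by IP; works if only naming is broken
    avahi-resolve -n DTC-JEJE.local   # does mDNS resolve the name at all?
    avahi-browse -rt _smb._tcp        # is the server advertised via mDNS/DNS-SD?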

    Hello,

    Since I upgraded to OMV6, my Samba shares are fully visible from my Windows clients, but not from my Ubuntu client.

    My OMV server does not appear in the file manager's network neighbourhood.

    My Ubuntu machine is running 22.04, so maybe it is related to this version, but I can't figure out how to troubleshoot it.

    Of course, if I connect using the server's IP it works, but that is not the expected behavior...
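
    While waiting for ideas, I can at least verify the client-side prerequisites (package names as on a stock Ubuntu 22.04; "myshare" is a placeholder share name):

    Code
    dpkg -l gvfs-backends avahi-daemon    # both are needed for SMB browsing in GNOME Files
    gio mount smb://<server-ip>/myshare   # mounting by IP works, as noted above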

    Thank you

    Hi all,

    After a complete installation of OMV6 on my new PC (AMD Ryzen 5 + GPU, 16 GB RAM, etc.), I wanted to install and configure my NVIDIA P400 GPU in order to do hardware video transcoding.

    So I installed the NVIDIA proprietary drivers, and here is my problem:

    At boot, the screen freezes at the "Loading initial ramdisk" message and then nothing more appears.

    The PC is alive, since I can log in with SSH and OMV is running.

    But for maintenance purposes, I'd like to keep this local screen available.
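
    One workaround I plan to test, assuming the freeze is a kernel mode-setting conflict between the console and the NVIDIA driver, is booting with nomodeset:

    Code
    # /etc/default/grub, edited as root
    GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset"
    # then apply the change and reboot
    update-grub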

    Any ideas on how to fix that?

    Thank you.

    Not possible. Take good notes and maybe even screenshots of all your current settings and use them for guidance when recreating your configuration from scratch in the GUI. You can also keep a copy of the current /etc/openmediavault/config.xml file and look through it for hints, but the file is not importable.
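
    For example, something along these lines keeps a dated reference copy around (paths as in a default OMV install):

    Code
    cp /etc/openmediavault/config.xml /root/config.xml.$(date +%F)   # reference copy only
    less /root/config.xml.*   # browse it for hints later; do not re-import it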

    Hello, me again....

    Another migration question, but this time moving OMV6 to another computer, new SSD, etc...

    Is it safe to move /etc/openmediavault/config.xml from the old computer to the new one in order to keep my settings, etc.?

    EDIT:

    As a last resort, I tried booting the SystemRescue image provided by OMV and ran (note the array ID has changed to md126):

    Code
    mdadm --run /dev/md126
    mdadm --readwrite /dev/md126

    and this is what I got:

    Code
    cat /proc/mdstat
    Personalities : [raid1] [raid6] [raid5] [raid4]
    md126 : active raid5 sdc[1] sdd[6] sdb[7] sde[5] sdf[4]
          11720666112 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
          [==>..................]  reshape = 11.0% (432049616/3906888704) finish=1920.6min speed=30153K/sec
          bitmap: 0/30 pages [0KB], 65536KB chunk
    
    md127 : active (auto-read-only) raid1 sdg[2] sdh[0]
          3906886464 blocks super 1.2 [2/2] [UU]
          bitmap: 0/30 pages [0KB], 65536KB chunk

    10x faster!

    The reshape finished in 1 day, which is what I expected.


    So, conclusion: I have a problem with my OMV6 install; it does not seem to be hardware-related...

    Any ideas?

    Thank you. If I am not wrong, all the HDDs in my array are SMR; there is no mix.

    Even if performance is expected to be lower with this type of HDD, what I am seeing is much, much lower!


    Please don't tell me that my only option is to wait 10 days for the reshape to finish, hoping that the new array will have normal read/write performance!

    I'm not really happy to have my NAS unusable for so long :(

    Ok, which is the new drive from above?

    /dev/sdb


    PS: I started a backup of important files from md127 to md126... while the reshape was running, performance was about 20 MB/hour :(

    So I put the reshape into frozen mode => now I can copy at 30 MB/s...
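
    For reference, the freeze/resume goes through sysfs (md127 here, as in the output below; as far as I know, writing idle clears the frozen state and lets the reshape continue):

    Code
    echo frozen > /sys/block/md127/md/sync_action   # pause the reshape
    cat /sys/block/md127/md/sync_action             # check the current state
    echo idle > /sys/block/md127/md/sync_action     # resume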

    Here are the outputs:

    Code
    cat /proc/mdstat
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md126 : active raid1 sdh[0] sdg[2]
          3906886464 blocks super 1.2 [2/2] [UU]
          bitmap: 9/30 pages [36KB], 65536KB chunk
    
    md127 : active raid5 sdc[1] sdf[4] sdd[6] sde[5] sdb[7]
          11720666112 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
          [=>...................]  reshape =  6.6% (261621248/3906888704) finish=16995.8min speed=3574K/sec
          bitmap: 0/30 pages [0KB], 65536KB chunk


    OK, here is the current setup: 6 motherboard SATA ports, 2 PCI cards.

    MB SATA 1 => md127
    MB SATA 2 => md127
    MB SATA 3 => md127
    MB SATA 4 => md127
    MB SATA 5 => empty
    MB SATA 6 => md127
    PCI1 SATA 1 => md126
    PCI1 SATA 2 => md126
    PCI2 SATA 1 => OMV6 SSD


    PS: booting from a SATA drive on the PCI cards has never been an issue for me.

    Now I will try to put the OS SSD on a motherboard SATA port...


    EDIT after reorganisation:

    MB SATA 1 => md127
    MB SATA 2 => md127
    MB SATA 3 => md127
    MB SATA 4 => md127
    MB SATA 5 => md127
    MB SATA 6 => OMV6 SSD
    PCI1 SATA 1 => md126
    PCI1 SATA 2 => md126


    I've removed the 2nd PCI card

    Now the OS and the RAID md127 are on motherboard SATA ports.

    My PCI card hosts the md126 HDDs, which run fine.

    After the reboot, I needed to bring the array online and make it read/write again.

    ==> Same performance: around 3500 K/sec :(

    Thank you for your feedback... let me give a short history of my setup.

    - The previous functional setup was:

    2x SSD in RAID1 for the OMV5 OS
    4x 4TB HDD in RAID5: vol 1
    2x 4TB HDD in RAID1: vol 2


    - Then I switched to OMV6 on a new SSD and unplugged the previous SSDs. At this stage, performance was OK.


    - Then I tried to grow my RAID5 vol 1 with a new 4TB HDD.

    This is where the problems started!

    => Concerning the new HDD:

    Tested: switching from a PCI card SATA port to a motherboard SATA port: same results.

    Tested: cable replacement: same results.


    My next test: the OS SSD is currently connected to a PCI card SATA port => I will try to switch it to a motherboard SATA port and stop using my PCI card.


    Another question:

    Is this iostat output informative?

    sda, which is the new hard drive, has very poor read stats. Is that normal? Perhaps yes, considering that during the reshape the new disk is mostly written to, with little read from it?
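
    For reference, I am watching the disks with something like this (iostat is in the sysstat package):

    Code
    iostat -dmx 5   # extended per-disk stats in MB/s every 5 s: r/s, w/s, await, %util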


    Bonus question: is there a way to cancel the reshape process, remove the new HDD, and go back to my previous 4-disk RAID configuration while I figure out this issue, to see if I get my previous performance back?
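
    From the mdadm man page there seems to be a revert option for exactly this case, though I have not tried it and it looks risky: the array must be stopped first, and the member list below assumes the five drives are still sdb..sdf as in the mdstat output above:

    Code
    mdadm --stop /dev/md127
    mdadm --assemble /dev/md127 --update=revert-reshape /dev/sd[b-f]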


    Thank you again for your help! If you need more info on my setup, let me know!

    The probability of a power loss during a 10-day period is much lower than during 2 months... which is the reshape duration announced when the cache is deactivated ;)


    BTW, I still don't understand why the reshape is so slow: 3500 K/sec! It should be between 20,000 and 50,000 K/sec.

    After all the tweaking tips I've tested, what points should I check that could explain this?
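
    For completeness, these are the usual md tuning knobs I have been checking (the values below are examples, not recommendations):

    Code
    sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max   # current reshape/rebuild limits (KB/s)
    sysctl -w dev.raid.speed_limit_min=50000                   # example: raise the floor
    echo 8192 > /sys/block/md127/md/stripe_cache_size          # example: larger RAID5/6 stripe cache

    None of them explains a drop to 3500 K/sec on its own.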