Posts by AleMagna

    Yesterday I powered the server off while the RAID5 array was still in its initialization phase (cleaning).
    Today, after powering it back on, I find on the OMV > RAID panel that the disks are in the PENDING state.


    I solved it with this guide:
    https://blog.sleeplessbeastie.…ing-resync-on-raid-array/


    In practice it was enough to run this command:
    mdadm --readwrite /dev/md0 (md0 is mine; use your own md device)
    to restart the cleaning.
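
    For anyone who wants to double-check first, a minimal sketch (my addition; md0 is only my device name, and the PENDING state is assumed to look like in the guide below):
    # cat /proc/mdstat
    (look for the array marked "resync=PENDING")
    # mdadm --readwrite /dev/md0
    (replace md0 with your own md device)
    # watch cat /proc/mdstat
    (follow the resync progress until it completes)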



    Why is this command not present in the OMV menus?
    Wouldn't it be useful to add it?

    How to clear up pending resync on RAID array


    This problem can be identified by inspecting the kernel ring buffer and the array states. Notice that background reconstruction started on the md1 array, but it is in the auto-read-only state and resynchronization is pending.


    # dmesg
    [...]
    [ 1.312811] md: md0 stopped.
    [ 1.313511] md: bind<sdb1>
    [ 1.313601] md: bind<sda1>
    [ 1.314335] md: raid1 personality registered for level 1
    [ 1.314572] md/raid1:md0: active with 2 out of 2 mirrors
    [...]
    [ 1.516790] md: md1 stopped.
    [ 1.517457] md: bind<sdb2>
    [ 1.517545] md: bind<sda2>
    [ 1.518947] md/raid1:md1: not clean -- starting background reconstruction
    [ 1.518949] md/raid1:md1: active with 2 out of 2 mirrors
    [...]


    # cat /proc/mdstat
    Personalities : [raid1]
    md1 : active (auto-read-only) raid1 sda2[0] sdb2[1]
          4203104 blocks super 1.2 [2/2] [UU]
          resync=PENDING

    md0 : active raid1 sda1[0] sdb1[1]
          973524352 blocks super 1.2 [2/2] [UU]

    unused devices: <none>


    Execute the following command to switch the array to the read-write state and begin the resync process.
    # mdadm --readwrite /dev/md1


    The transition will be immediately visible by inspecting the array states.
    # cat /proc/mdstat
    Personalities : [raid1]
    md1 : active raid1 sda2[0] sdb2[1]
          4203104 blocks super 1.2 [2/2] [UU]
          [=>...................] resync = 8.1% (333952/4203104) finish=1.3min speed=47707K/sec

    md0 : active raid1 sda1[0] sdb1[1]
          973524352 blocks super 1.2 [2/2] [UU]

    unused devices: <none>
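
    A side note of mine, not part of the original guide: the resync speed is bounded by two kernel sysctls, and their defaults (1000 and 200000 KB/sec) are exactly the limits printed in the dmesg output further down. They can be raised temporarily if the resync crawls.
    # sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
    (print the current lower and upper resync speed bounds, in KB/sec per disk)
    # sysctl -w dev.raid.speed_limit_min=50000
    (temporarily raise the guaranteed minimum to push the resync harder)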


    The end result can be verified after the synchronization process finishes.
    # cat /proc/mdstat
    Personalities : [raid1]
    md1 : active raid1 sda2[0] sdb2[1]
          4203104 blocks super 1.2 [2/2] [UU]

    md0 : active raid1 sda1[0] sdb1[1]
          973524352 blocks super 1.2 [2/2] [UU]

    unused devices: <none>

    The transition messages will be stored in the kernel ring buffer.


    # dmesg
    [171485.722209] md: md1 switched to read-write mode.
    [171485.722895] md: resync of RAID array md1
    [171485.722898] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
    [171485.722901] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
    [171485.722905] md: using 128k window, over a total of 4203104k.
    [171558.652298] md: md1: resync done.
    [171558.764453] RAID1 conf printout:
    [171558.764457] --- wd:2 rd:2
    [171558.764460] disk 0, wo:0, o:1, dev:sda2
    [171558.764463] disk 1, wo:0, o:1, dev:sdb2
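
    An extra check of mine, not in the original guide: after the resync finishes, the array details can also be confirmed with mdadm itself.
    # mdadm --detail /dev/md1
    (the "State :" line should now read "clean", with both members listed as "active sync")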


    source: https://blog.sleeplessbeastie.…ing-resync-on-raid-array/

    From dvd


    I read on the web:
    "NVME support is much better from 3.19 kernels, but Debian 8 only ships with 3.16. Additionally the release version of grub2 has an install bug where it doesn’t properly recognise NVMe devices. Let’s pull a more recent kernel and grub-efi from jessie-backports to resolve!"


    ... and now?
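
    For reference, the backports route the quote describes would look roughly like this on Debian 8 amd64 (my sketch, untested on OMV; the mirror URL and the sources file name are assumptions):
    # echo "deb http://httpredir.debian.org/debian jessie-backports main" > /etc/apt/sources.list.d/jessie-backports.list
    # apt-get update
    # apt-get -t jessie-backports install linux-image-amd64 grub-efi-amd64
    (then reboot into the new kernel)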

    Sorry for the late reply, but I did not receive a notification of your comment!
    First of all, thank you for the attention you paid to my listing :-)

    The ServerOne is relatively expensive because it is aimed at users with big ambitions in the NAS server world: the case alone in which it is assembled is enterprise-class and of high value, and there is also ample room for expansion and for scaling up its services.

    That said, a ServerOne with a less ambitious and therefore cheaper case is in the final stage of production and will soon go on sale; you can see a preview of it here. Shortly I will also offer the same hardware and software in PC-style cases with dedicated multibay caddies, at even lower prices and without losing any of the overall qualities of the ServerOne. Among the PC cases there will also be an expressly "All in One" product, that is, a PC and a NAS server integrated in a single case. As they become ready, I will post further comments to introduce them.

    Laser Office sas of Rimini announces the availability of the new NAS server of the ServerOne series, built with top-quality components and equipped with considerable computing power to fully handle every task entrusted to it. It is therefore not a simple NAS but a centralized computer that can run several jobs at the same time without penalizing the performance of any of them: for example, sharing documents, games, photos, music and videos on a local network and/or over the internet; acting as a backup archive and as a DLNA and DAAP server for streaming films and audio on the local network; downloading via various protocols (FTP/BitTorrent/eDonkey); up to serving as a web/email host and much more.

    I have been trying for 3 days without success.


    On System > Network > Interfaces > + Add
    I set up the onboard WiFi (the wired connection already works), and on applying the change ... wait, wait, wait ... and an error:


    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C; monit monitor collectd 2>&1' with exit code '1': Socket error -- Connection timed out Cannot connect to the monit daemon. Did you start it with http support?


    Errore #0:
    exception 'OMV\ExecException' with message 'Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C; monit monitor collectd 2>&1' with exit code '1': Socket error -- Connection timed out
    Cannot connect to the monit daemon. Did you start it with http support?' in /usr/share/php/openmediavault/system/process.inc:175
    Stack trace:
    #0 /usr/share/php/openmediavault/system/monit.inc(115): OMV\System\Process->execute()
    #1 /usr/share/php/openmediavault/system/monit.inc(82): OMV\System\Monit->action('monitor', false)
    #2 /usr/share/openmediavault/engined/module/collectd.inc(81): OMV\System\Monit->monitor()
    #3 /usr/share/openmediavault/engined/rpc/config.inc(189): OMVModuleCollectd->startService()
    #4 [internal function]: OMVRpcServiceConfig->applyChanges(Array, Array)
    #5 /usr/share/php/openmediavault/rpc/serviceabstract.inc(124): call_user_func_array(Array, Array)
    #6 /usr/share/php/openmediavault/rpc/serviceabstract.inc(150): OMV\Rpc\ServiceAbstract->callMethod('applyChanges', Array, Array)
    #7 /usr/share/php/openmediavault/rpc/serviceabstract.inc(528): OMV\Rpc\ServiceAbstract->OMV\Rpc\{closure}('/tmp/bgstatusR3...', '/tmp/bgoutputGa...')
    #8 /usr/share/php/openmediavault/rpc/serviceabstract.inc(151): OMV\Rpc\ServiceAbstract->execBgProc(Object(Closure))
    #9 /usr/share/openmediavault/engined/rpc/config.inc(208): OMV\Rpc\ServiceAbstract->callMethodBg('applyChanges', Array, Array)
    #10 [internal function]: OMVRpcServiceConfig->applyChangesBg(Array, Array)
    #11 /usr/share/php/openmediavault/rpc/serviceabstract.inc(124): call_user_func_array(Array, Array)
    #12 /usr/share/php/openmediavault/rpc/rpc.inc(84): OMV\Rpc\ServiceAbstract->callMethod('applyChangesBg', Array, Array)
    #13 /usr/sbin/omv-engined(516): OMV\Rpc\Rpc::call('Config', 'applyChangesBg', Array, Array, 1)
    #14 {main}


    What is this, a timeout??? It has never happened before!!!


    The diagnostic log screenshot follows.
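
    In case it helps others, the first things I would try (a guess of mine, assuming the monit daemon simply is not running):
    # systemctl status monit
    (check whether the monit daemon is running at all; OMV3 runs on Debian 8, so systemd is available)
    # systemctl restart monit
    (restart it, then wait a few seconds)
    # monit monitor collectd
    (retry by hand the exact command that timed out)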

    No backup, and no important data on the RAID5 array.
    I reinstalled OMV3 on /dev/sdd, and the RAID5 array was already assembled from the first boot of the new installation.
    Now, is there a way to get /dev/md0 into the table under Storage > File systems, or not?
    If not ... I will destroy and recreate the RAID5 array.
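
    Before destroying it, a quick check (my assumption: the Storage > File systems panel only lists filesystems, so a freshly assembled array with no filesystem on it has nothing to show yet):
    # cat /proc/mdstat
    (confirm the array is assembled and clean)
    # blkid /dev/md0
    (if this prints nothing, the array has no filesystem yet; in that case create one from Storage > File systems > + Create instead of rebuilding the RAID5)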