Hard disk writes every 5 seconds after resizing the file system

  • Good morning,

    I have a problem with disk usage after expanding the RAID and the file system (via the WebGUI).

    Now, as you can see in the image, the disks are in constant use and I can't understand why.

    I'm new to OMV and I don't really know what to do.


    Before expanding the RAID the disks would spin down, but now they no longer do.

    Do you have any advice?

    Please help me :(


    Thanks so much.

  • Askbruzz

    Changed the title of the thread from "Hard Disk usage" to "Hard disk writes every 5 seconds after resizing the file system".
  • You can log hard drive access under Linux:


    Code
    echo 1 > /proc/sys/vm/block_dump


    The output then ends up under:

    Code
    /var/log/kern.log


    or


    Code
    /var/log/syslog


    and this is how you switch it off again:

    Code
    echo 0 > /proc/sys/vm/block_dump
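    If you just want to watch the resulting activity live, a minimal sketch (assuming the log ends up in /var/log/syslog as above) is:

    Code
    tail -f /var/log/syslog | grep "WRITE block"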


  • Thank you so much.


    I have found this:


    What can it be? :D

  • I'm afraid I can't help you; let's hope that one of our experts will have an answer :)


    But check the warning at the bottom of this page.


    • Official Post

    Could reinstalling OMV be a valid option?

    Not necessarily. If you google the information from the log, it has something to do with the recent RAID expansion. What is the condition of the drives, i.e. are any bad sectors being referenced in SMART, any SMART errors, particularly attributes 5, 197 and 198? You might need to run fsck on the RAID itself.
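    For reference, a hedged way to read those attributes from the command line (assuming smartmontools is installed and the drives appear as /dev/sda, /dev/sdb, ...):

    Code
    smartctl -A /dev/sda | grep -E 'Reallocated_Sector|Current_Pending_Sector|Offline_Uncorrectable'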

  • Not necessarily. If you google the information from the log, it has something to do with the recent RAID expansion. What is the condition of the drives, i.e. are any bad sectors being referenced in SMART, any SMART errors, particularly attributes 5, 197 and 198? You might need to run fsck on the RAID itself.

    Hello and thanks :)

    I have checked and, I think, there is no SMART error.

    The 4 disks used to expand the RAID were previously used in a Synology NAS, and before expanding the OMV RAID I only did a quick format of the disks.


    fsck, I have no idea how to use it; I'm a complete noob with OMV ;(

    • Official Post

    I have checked and, I think, there is no SMART error.

    That always makes me :) when that is quoted. You need to check under Storage -> Disks -> SMART, run a long self-test on each drive and review the output.
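    For reference, the same long self-test can also be started from the shell; a sketch, assuming smartmontools and a drive at /dev/sda:

    Code
    smartctl -t long /dev/sda    # start the long self-test
    smartctl -a /dev/sda         # review attributes and the self-test log once it has finished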

    The 4 disks used to expand the RAID were previously used in a Synology NAS, and before expanding the OMV RAID I only did a quick format of the disks.

    When you say formatted I assume you mean wipe, and judging by that log you now have 9 drives in that RAID 6.


    What's the output of cat /proc/mdstat?
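    If the mdstat output is hard to read, a sketch of a more detailed check (assuming the array is /dev/md0, as elsewhere in this thread):

    Code
    cat /proc/mdstat
    mdadm --detail /dev/md0    # shows the array State and the active/working/failed device counts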

    fsck, I have no idea how to use it

    What you would do is run fsck /dev/md0 (see: what is fsck).
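    A cautious way to approach that; a sketch only, and the mount point under /srv/ is an example, check yours with df -h:

    Code
    fsck -n /dev/md0                         # read-only check; results on a mounted file system are only approximate
    umount /srv/dev-disk-by-label-NasData    # unmount before a real repair (example path)
    fsck /dev/md0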

  • That always makes me :) when that is quoted. You need to check under Storage -> Disks -> SMART, run a long self-test on each drive and review the output.

    Thanks, I'll run a long test now :)

    When you say formatted I assume you mean wipe, and judging by that log you now have 9 drives in that RAID 6.


    What's the output of cat /proc/mdstat?

    Here is the output:



    What you would do is run fsck /dev/md0 (see: what is fsck).

    Here is the output:

    Code
    root@NasOMV:~# fsck /dev/md0
    fsck from util-linux 2.29.2
    e2fsck 1.44.5 (15-Dec-2018)
    NasData: clean, 150847/1281738752 files, 7788695376/20507820032 blocks
    root@NasOMV:~#
  • The output from both: mdstat tells you the RAID is active and not rebuilding, and fsck shows there are no file system errors on the array. The next step is to confirm the state of each drive. Any media servers running, i.e. Plex, Emby?

    I have Plex running in Docker, but all my Docker containers are stopped now.


    Thanks for your help :)

    • Official Post

    OK, if you search for jbd2/md0-8(828): WRITE block 35153047312 on md0 (8 sectors) from the log output, and likewise ext4lazyinit(830): WRITE block 79633063168 on md0 (1024 sectors) and md0_raid6(270): WRITE block 16 on sdf (8 sectors):


    According to what I have read, this will stop, but due to the number of drives in the array it could take some time.
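    ext4lazyinit is the ext4 kernel thread that zeroes the inode tables of the newly added block groups in the background after the file system has been grown, which would explain the constant writes. A rough way to confirm it is still running (assuming the block_dump logging from earlier in the thread is still enabled):

    Code
    ps aux | grep '[e]xt4lazyinit'              # the kernel thread doing the background zeroing
    grep ext4lazyinit /var/log/syslog | tail    # recent writes tagged with it in the log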

  • OK, if you search for jbd2/md0-8(828): WRITE block 35153047312 on md0 (8 sectors) from the log output, and likewise ext4lazyinit(830): WRITE block 79633063168 on md0 (1024 sectors) and md0_raid6(270): WRITE block 16 on sdf (8 sectors):


    According to what I have read, this will stop, but due to the number of drives in the array it could take some time.

    Thank you so much.

    I hope this will end soon :D

    I'll try to wait a couple of days and see what happens :)

    It has now been 2 days and this thing hasn't stopped.

  • OK, if you search for jbd2/md0-8(828): WRITE block 35153047312 on md0 (8 sectors) from the log output, and likewise ext4lazyinit(830): WRITE block 79633063168 on md0 (1024 sectors) and md0_raid6(270): WRITE block 16 on sdf (8 sectors):


    According to what I have read, this will stop, but due to the number of drives in the array it could take some time.

    Sorry, but one more question: is it normal that the blocks written are always the same?

    • Official Post

    Sorry, but one more question: is it normal that the blocks written are always the same?

    I don't know, but what you have to remember in a RAID setup is that data is written across all drives. Most home users don't need to use a RAID option; there are other ways to store data. RAID options are driven by hardware vendors, and most users think that is the norm; if someone wants to use a software RAID, they need to understand how it works and how to recover from it.

    What you have set up is something I would never ever consider: nine drives in a single array. mergerfs and SnapRAID would be a better choice.

    You say the NAS is noisy; I take it that is from the drives. If so, it's something I experienced some time ago, but that was from old hardware and older drives.

  • I don't know, but what you have to remember in a RAID setup is that data is written across all drives. Most home users don't need to use a RAID option; there are other ways to store data. RAID options are driven by hardware vendors, and most users think that is the norm; if someone wants to use a software RAID, they need to understand how it works and how to recover from it.

    What you have set up is something I would never ever consider: nine drives in a single array. mergerfs and SnapRAID would be a better choice.

    You say the NAS is noisy; I take it that is from the drives. If so, it's something I experienced some time ago, but that was from old hardware and older drives.

    I understand your point of view, but when someone is a newbie like me, mistakes are easy to make.


    The only problem is that the NAS is now noisy because the drives write every 5 seconds; before expanding the RAID the NAS was silent and the drives went to sleep.

    :)


    I think it's too late to make changes now :(
