New to OMV, drives won't go to spindown/standby/sleep

  • Hi All,


    Just migrated over to a native OMV install from ESXi with OmniOS.


    I have a Supermicro X9SCL with a Xeon 1230v2,
    Supermicro 743-856B-SQ Chassis
    32GB ECC RAM
    LSI 9207-8i in IT mode
    8x Hitachi 5K3000 2TB Drives
    1x Intel SSD for OMV OS Drive.


    I have tried setting the hard drives to spin down, but it doesn't work.
    Looking at the activity lights on my backplane there is constant activity on the hard drives, and I haven't even got as far as setting up a user share yet. I just have EXT4 mounted on RAID 6 using the GUI settings.
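
    For context, and assuming OMV applies its spindown setting through hdparm's -S flag (the usual mechanism on Debian), the -S value encodes the idle timeout: values 1-240 are multiples of 5 seconds. A sketch, where `spindown_value` is my own helper, not an OMV tool, and /dev/sdX is a placeholder:

    ```shell
    # Sketch: hdparm -S values 1..240 encode idle timeouts in 5-second units
    # (so up to 20 minutes). spindown_value is a hypothetical helper.
    spindown_value() {
      echo $(( $1 * 60 / 5 ))   # minutes -> -S value
    }
    spindown_value 10                              # prints 120
    # hdparm -S "$(spindown_value 10)" /dev/sdX    # 10-minute standby timeout
    # hdparm -C /dev/sdX                           # query current power state
    ```

    `hdparm -C` is handy here: it reports whether a drive is actually in standby, independent of what the backplane lights suggest.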


    Any help will be greatly appreciated, as this looks like an excellent bit of software, especially with the VirtualBox plugin.


    Dale

  • Are you sure one of the plugins is not writing to the array? As you probably know, if anything is written to the RAID array, all of the drives will spin up.

    omv 5.5.17-3 usul | 64 bit | 5.4 proxmox kernel | omvextrasorg 5.4.2
    omv-extras.org plugins source code and issue tracker - github


    Please read this before posting a question.
    Please don't PM for support... Too many PMs!

  • I don't have my arrays set to spindown but I know others have gotten their arrays to spindown. So, maybe it is a missed setting somewhere?


  • OK. Deleted the file system, deleted the RAID array.
    Rebooted, re-created the RAID array. All drives spin down, and they spin up staggered, individually; not sure if that's right or not? (My LSI spins pairs up during boot.)


    Will try and create a file system to see what's causing them to stay awake...

  • jbd2 is causing the writes?


    Code
    UUID=c10af539-9f83-47af-a917-e4fb266d3858 / ext4 errors=remount-ro 0 1
    # swap was on /dev/sda5 during installation
    UUID=d7f83b01-9ff5-4890-aee0-cc0ef769d9de none swap sw 0 0
    /dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
    /dev/sr0 /media/floppy0 auto rw,user,noauto 0 0
    tmpfs /tmp tmpfs defaults 0 0
    # >>> [openmediavault]
    UUID=61a81b73-8784-4e12-a577-819503c15d8f /media/61a81b73-8784-4e12-a577-819503c15d8f ext4 defaults,nofail,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0 0 2
    # <<< [openmediavault]



    Why isn't noatime, nodiratime, or relatime in fstab by default?


    I have just added noatime to my SSD system drive... testing noatime on my array now.
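
    A sketch of how that edit looks, practised on a sample copy first (the options line mirrors the root entry from the fstab above; back up the real /etc/fstab before changing it):

    ```shell
    # Sketch: append noatime to an ext4 entry's mount options, done on a
    # sample file rather than the real /etc/fstab
    cat > /tmp/fstab.sample <<'EOF'
    UUID=c10af539-9f83-47af-a917-e4fb266d3858 / ext4 errors=remount-ro 0 1
    EOF
    sed -i '/ ext4 /s/\(ext4 [^ ]*\)/\1,noatime/' /tmp/fstab.sample
    cat /tmp/fstab.sample
    # mount -o remount,noatime /   # apply to the running system without a reboot
    ```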

  • jbd2 is still writing with noatime on both the SSD and the array...


    Seems to be an issue with EXT4: jbd2 is constantly writing to my SSD and my RAID 6 array, which are on different controllers as well.


    Trying EXT3 on the array. EXT3 takes much longer to initialise the filesystem, while EXT4 initialised within a few seconds?


    Think this may be a bug with EXT4 causing excessive writes?

  • ext3 and ext4 both initialise much slower than XFS. Maybe try XFS?


    Thanks for your reply,
    I am not bothered about the time it takes to initialise; I only have to initialise once. I am concerned about the constant writes from jbd2.
    Looking online, there are a lot of reports of this issue on Wheezy and also Ubuntu. It causes wear on hard drives/SSDs and also stops them from sleeping.


    Has anyone else looked into this issue? When did it arise with OMV? I'm sure it's happening on every system if the user checks with iotop.

  • Have you tried upgrading to a newer kernel? That's the journal writing.
    Maybe it's constant log writing; check /var/log to see which files are growing.
    From there, it may be a constant error being output to a log, hence the journal writing.


    I haven't manually installed a kernel via the CLI, but I have installed the updates in the web GUI?


    I will have a look at the logs...
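
    To see which logs are growing, something like this sketch should work (`grew` is my own helper, not an OMV tool; GNU find assumed):

    ```shell
    # Sketch: list files under a directory that grew between two samples,
    # to spot a log that something is constantly appending to
    grew() {
      dir=$1; delay=${2:-10}
      find "$dir" -type f -printf '%s %p\n' 2>/dev/null | sort -k2 > /tmp/sizes.a
      sleep "$delay"
      find "$dir" -type f -printf '%s %p\n' 2>/dev/null | sort -k2 > /tmp/sizes.b
      # lines starting with '>' are the new (larger) sizes
      diff /tmp/sizes.a /tmp/sizes.b | grep '^>' || echo "no growth in ${delay}s"
    }
    # grew /var/log 10    # sample /var/log over 10 seconds
    ```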

  • daemon.log is the biggest, but none are growing...


    This is all that's in daemon.log, repeating constantly.


    Also checking iotop, I see a TID with rrdcached pop up every few seconds!


    Code
    Nov 25 21:30:17 Aurora collectd[5196]: Filter subsystem: Built-in target `write': Dispatching value to all write plugins failed with status -1.
    Nov 25 21:30:27 Aurora collectd[5196]: rrdcached plugin: rrdc_update (/var/lib/rrdcached/db/localhost/df-root/df_complex-free.rrd, [1416951027:34389663744.000000], 1) failed with status -1.
    Nov 25 21:30:27 Aurora collectd[5196]: Filter subsystem: Built-in target `write': Dispatching value to all write plugins failed with status -1.
    Nov 25 21:30:27 Aurora collectd[5196]: rrdcached plugin: rrdc_update (/var/lib/rrdcached/db/localhost/df-root/df_complex-reserved.rrd, [1416951027:1917321216.000000], 1) failed with status -1.
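
    As a lightweight alternative to watching iotop, the per-process I/O counters can be read straight from /proc (fields documented in proc(5)); a sketch, noting that kernel threads like jbd2 need root to read:

    ```shell
    # Sketch: read a process's cumulative write counter from /proc/<pid>/io
    # (requires Linux task I/O accounting; root needed for other users' PIDs)
    writes_of() {
      awk '/^write_bytes/ {print $2}' "/proc/$1/io"
    }
    writes_of $$     # bytes the current shell has written so far
    ```

    Sampling the same PID twice a few seconds apart shows whether it is still writing, which is exactly the jbd2 question here.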
  • The backport kernel 3.16 is installed through omv-extras in the web UI, no CLI needed.


    What are the benefits of the backport kernel?


    Also, I ran a test with EXT3 and everything was fine: no constant activity, slow to initialise, but once mounted there were no issues and the drives spun down.
    Tried EXT4 again: very quick to initialise, constant drive activity, no spindown?


    Is it possible EXT4 takes days to initialise fully in the background, i.e. deferred? Hence the quicker initialisation than EXT3?
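
    For what it's worth, mkfs.ext4 does defer zeroing the inode tables and journal by default: the ext4lazyinit kernel thread finishes the job in the background after the first mount, which can look like mystery disk activity and would explain both the fast format and the post-mount writes. A sketch of formatting with lazy init disabled, demonstrated on a sparse image file (use the real array device, e.g. /dev/md0, in practice):

    ```shell
    # Sketch: disable ext4 lazy initialisation so all zeroing happens up front
    # (slower mkfs, like EXT3, but no background ext4lazyinit writes after mount)
    truncate -s 256M /tmp/ext4demo.img
    mkfs.ext4 -q -F -E lazy_itable_init=0,lazy_journal_init=0 /tmp/ext4demo.img
    ```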
