Posts by TheBay

    There is an alpha zfs plugin in development right now. Works well on my test box.

    XFS is working fine, no constant activity and spin down works perfectly...

    What is going on with EXT4? Why does it work for some users and not others? This could be causing wear issues on some users' hard drives/SSDs...

    Is there any way to change the spin up from a single drive at a time to multiple drives?

    No I don't, I've just checked. Sometimes I can hear the disks spinning up when I log in through SSH and start doing some CLI work on the media drives, but that's only occasional. My drives are almost always active anyway, mainly because of torrents.

    The guy said it might take 10 minutes per 200 GB on ext4 for jbd2 to write to the partition after initialisation. Check the Ubuntu reference I posted.

    This is what I was wondering; maybe I need to leave it a few days after RAID/filesystem initialisation to see if it calms down. I left it on for maybe 9 hours overnight and it was still doing it. EXT4 can be configured to format/initialise in the background so there is less of a wait before a large filesystem is usable, but that means it isn't really a "quick format" as such, since the work is ongoing.
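    For what it's worth, the "format in the background" behaviour being described sounds like ext4's lazy initialisation, which can be turned off at format time so the inode tables and journal are written up front. A sketch, assuming the target partition is /dev/sdX1 (a placeholder device name, run as root):

    ```shell
    # Format with lazy init disabled: mkfs takes longer, but nothing is
    # deferred to after mount, so no background init writes afterwards.
    mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/sdX1

    # With lazy init left on (the default on recent e2fsprogs/kernels),
    # the deferred work shows up in iotop as a kernel thread named
    # "ext4lazyinit" until it completes.
    ```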

    Why not XFS? I use XFS. I used to use ext4, but when Red Hat Enterprise Linux 7 switched to XFS as the default filesystem, that told me it was very good. And when I switched, ext4 was still limited to 16 TB and my RAID array was getting close.

    XFS is new to me, EXTx, ReiserFS, ZFS I know well.

    Interesting that you use XFS, I might try it if other users are using it successfully.

    I use ext4. btrfs is available in Wheezy but not in OMV: you can't mount it with the button, nor format it. You have to use the CLI.
    But you can work around that with manual entries in fstab and config.xml; that way the volumes are available for creating shares, since the drives appear to be registered in OMV.
    The configuration will be lost if you press the unmount button, but it should persist across reboots. Check the forums; a person asked about btrfs last week and we gave him a couple of hints.
    How long have you waited for the disks to stop? You have 12 TB, so that's about 12 hours according to that hint.
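    For reference, the fstab half of that workaround might look like the entry below (the UUID and mount point are placeholders; the matching mntent entry in config.xml still has to be added by hand for OMV to pick the volume up):

    ```
    # /etc/fstab -- hypothetical btrfs volume, mounted outside the WebGUI
    UUID=<your-btrfs-uuid> /media/<your-btrfs-uuid> btrfs defaults,nofail 0 2
    ```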

    Are you getting constant activity from jbd2? This will be shortening the life of your drives.

    Just looked into btrfs. Although I have used Unix-like OSes for at least 18 years, I don't think I'll just botch something together via the CLI. I like the way OMV does everything via the WebGUI; it works well.

    Sorry, I do not understand what you mean about waiting for the disk to stop?

    The backports kernel is a much newer version (3.16 vs 3.2) and has newer/updated code. It may eliminate a problem that the 3.2 kernel has.

    No, ext4 doesn't do anything that takes days to initialise. But if you create an mdadm array and don't let it finish syncing, it can take days before the array is fully built.
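    A quick way to tell whether the background activity is still the mdadm resync rather than the filesystem (assuming the array device is /dev/md0, a placeholder; mdadm --detail needs root):

    ```shell
    # /proc/mdstat shows per-array sync progress while a resync is running,
    # e.g. "[=======>.......]  resync = 37.4% ... finish=312.5min"
    cat /proc/mdstat

    # mdadm reports the array state and, during a rebuild, its progress
    mdadm --detail /dev/md0 | grep -i -E 'state|rebuild'
    ```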

    I just installed the backport kernel, the same thing is happening.

    On EXT4 there is constant disk writing; you can hear it happening, the sound is constant, and there is no spin down.
    EXT3: no issue at all, no disk writing, and spindown works.

    The backports kernel 3.16 is installed through OMV-Extras in the WebUI, no CLI.

    What are the benefits of the backports kernel?

    Also I ran a test with EXT3 and everything was fine: no constant activity, slow to initialise, but once mounted there were no issues and the drives spun down.
    Tried EXT4 again: very quick to initialise, constant drive activity, no spindown?

    Is it possible EXT4 can take days to initialise fully in the background or something, i.e. deferred? Hence the quicker initialisation compared to EXT3?

    daemon.log is the biggest, but none are growing...

    This is all that's in daemon.log, this constantly.

    Also checking iotop, I see a TID with rrdcached pop up every few seconds!

    Nov 25 21:30:17 Aurora collectd[5196]: Filter subsystem: Built-in target `write': Dispatching value to all write plugins failed with status -1.
    Nov 25 21:30:27 Aurora collectd[5196]: rrdcached plugin: rrdc_update (/var/lib/rrdcached/db/localhost/df-root/df_complex-free.rrd, [1416951027:34389663744.000000], 1) failed with status -1.
    Nov 25 21:30:27 Aurora collectd[5196]: Filter subsystem: Built-in target `write': Dispatching value to all write plugins failed with status -1.
    Nov 25 21:30:27 Aurora collectd[5196]: rrdcached plugin: rrdc_update (/var/lib/rrdcached/db/localhost/df-root/df_complex-reserved.rrd, [1416951027:1917321216.000000], 1) failed with status -1.

    Have you tried upgrading to a newer kernel? That's the journal writing.
    Maybe it's constant log writing; check in /var/log which files are growing in size.
    From there it may be a constant error being written to a log, hence the journal writing.
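    One way to do that check is to snapshot per-file sizes twice and diff them; a small sketch (the function name and defaults are my own, GNU du assumed):

    ```shell
    # growing_files DIR [INTERVAL] -- print files under DIR whose recorded
    # size changed during INTERVAL seconds (default 30). Useful for
    # spotting a runaway log, e.g.:  growing_files /var/log 30
    growing_files() {
        dir=$1
        interval=${2:-30}
        before=$(mktemp) || return 1
        after=$(mktemp) || return 1
        du -a -b "$dir" 2>/dev/null | sort -k2 > "$before"
        sleep "$interval"
        du -a -b "$dir" 2>/dev/null | sort -k2 > "$after"
        # lines only in the second snapshot = files that grew or appeared
        diff "$before" "$after" | grep '^>'
        rm -f "$before" "$after"
    }
    ```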

    I haven't manually installed a kernel via the CLI, but I have installed the updates in the web GUI?

    I will have a look at the logs...

    ext3 and ext4 both initialize much slower than xfs. Maybe try xfs?

    Thanks for your reply,
    I am not bothered about the time it takes to initialise, as I only have to initialise once. I am concerned about the constant writes from jbd2.
    Looking online, there are a lot of reports of this issue with Wheezy and also Ubuntu. It is causing wear on hard drives/SSDs and also stopping them from sleeping.

    Has anyone else looked into this issue? When did it arise with OMV? I'm sure it's happening on every system if the user checks with iotop.
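    For anyone who wants to reproduce the check, per-process disk writes can be watched in batch mode like this (iotop needs root; the 5-second delay and 12 samples are just example values):

    ```shell
    # -o: only show processes actually doing I/O, -b: batch (scriptable)
    # output, -t: timestamp each line. Watch for [jbd2/...] kernel threads.
    sudo iotop -o -b -t -d 5 -n 12 | grep -E 'jbd2|DISK WRITE'
    ```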

    jbd2 is still writing with noatime on both the SSD and the array...

    Seems to be an issue with EXT4: jbd2 is constantly writing to my SSD and RAID 6 array, which are on different controllers as well.

    Trying EXT3 on the array: EXT3 takes much longer to initialise the filesystem, whereas EXT4 initialised within a few seconds?

    Think this may be a bug with EXT4 causing excessive writes?

    jbd2 is causing the writes?

    UUID=c10af539-9f83-47af-a917-e4fb266d3858 /               ext4    errors=remount-ro 0       1
    # swap was on /dev/sda5 during installation
    UUID=d7f83b01-9ff5-4890-aee0-cc0ef769d9de none            swap    sw              0       0
    /dev/sr0        /media/cdrom0   udf,iso9660 user,noauto     0       0
    /dev/sr0        /media/floppy0  auto    rw,user,noauto  0       0
    tmpfs           /tmp            tmpfs   defaults        0       0
    # >>> [openmediavault]
    UUID=61a81b73-8784-4e12-a577-819503c15d8f /media/61a81b73-8784-4e12-a577-819503c15d8f ext4 defaults,nofail,acl,user_xattr,noexec,usrjquota=aquota.user,,jqfmt=vfsv0 0 2
    # <<< [openmediavault]

    Why isn't noatime, nodiratime, or relatime in fstab by default?

    I have just added noatime to my SSD System drive... testing noatime on my array now.
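    Assuming noatime has been added to the options column of the relevant /etc/fstab lines, it can be applied and checked without a reboot (mount points below are examples from the fstab above, remount needs root):

    ```shell
    # Re-apply the fstab options for a mounted filesystem in place
    sudo mount -o remount,noatime /

    # Confirm the option actually took effect on the root filesystem
    findmnt -no OPTIONS /
    ```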