Posts by Flaschie

    Similar issue; I've been getting cron-apt mail like the following for days now (it's the same every day, so I guess zfs-dkms is not being upgraded).



    I just tried omv-aptclean; the web GUI seems to be out of sync with regard to updates (e.g. it lists a lot of updates, but I cannot install them due to dependency issues):
    PS: Let me know if I should translate some of the output below. It's basically complaining that openrc needs insserv and conflicts with sysv-rc, and that sysv-rc needs insserv. I have insserv 1.14, so I'm not sure what it is complaining about....
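    For what it's worth, these are the commands I usually reach for to inspect a dependency tangle like this; not a fix, just a sketch of where to look:

```shell
# Show which versions/candidates apt actually sees for the conflicting packages
apt-cache policy openrc insserv sysv-rc

# Show what openrc declares it depends on / conflicts with
apt-cache show openrc | grep -E '^(Depends|Conflicts|Breaks):'

# Let apt attempt to repair broken dependencies
apt-get -f install
```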



    If I try an apt-get upgrade over SSH, the ZFS packages are not installed (they are held back):

    Code
    apt-get upgrade
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Calculating upgrade... Done
    The following packages have been kept back:
    zfs-dkms zfs-zed zfsutils-linux
    0 upgraded, 0 newly installed, 0 to remove and 3 not upgraded.
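    For anyone landing here with the same symptom: packages get "kept back" when the upgrade would need to install new dependencies or remove packages, which plain `apt-get upgrade` refuses to do. A sketch of the usual ways out (run at your own risk):

```shell
# Allow apt to install new dependencies / remove packages as needed
apt-get dist-upgrade

# ...or target the held-back packages explicitly, which has the same effect
# for just these three
apt-get install zfs-dkms zfs-zed zfsutils-linux
```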

    I can also confirm I've been having this "error" for years. It shows up when I look at the SMART data in the GUI (I haven't tested the CLI), and the error is ABRT on the SMART read/write commands, as you've shown. It happens on my WD Reds, my WD Re and my WD AV-GP. And my 8 disks are still alive, with the exception of those that died :whistling: (although there's no reason to correlate that with the error in question ;))

    This is probably not directly linked to OMV, but as there are many wise people here maybe someone can help or has experienced the same :)


    My system/power LED used to blink in OMV3 when the NAS entered suspend mode (Autoshutdown plugin: pm-suspend). After upgrading to OMV4, this no longer happens (I only recently upgraded). The LED is constantly on, and this is of course very annoying, as I have trouble telling whether the NAS is on or not. As far as my googling goes, this is not software related but something set in the BIOS. However, as stated, it did work in OMV3, so something must have changed somewhere. I have looked through the BIOS and cannot find anything, and I also updated to the latest BIOS without success. My motherboard is a Supermicro X9SCM-F.


    Does anyone have tips on how to get the blinking back? I'm very close to keeping my NAS on 24/7 just to avoid this... ;(

    I guess not. From my understanding, there is no real raid 10 in zfs. You can create mirrors and then stripe them but I guess we don't allow you to use a drive in use anymore (I think I actually changed that). You could create the mirrors in the web interface and then create the stripe from the command line and import it in the plugin. Having to do advanced things from the command line is not unreasonable. Not *every* thing can be available from the web interface. It makes it too complicated.


    Wouldn't creating a mirror pool and then using "Expand" in the web interface to add another mirror create a RAID10 equivalent in ZFS, i.e. striped mirror vdevs? I believe I did this when I created my pool some time ago (things have happened to the plug-in since then).
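    If it helps, here is roughly what that looks like from the command line (pool name "tank" and the device names are just examples):

```shell
# First mirror vdev
zpool create tank mirror /dev/sda /dev/sdb

# The "Expand" step: stripe a second mirror vdev into the pool,
# giving the RAID10-like layout (writes striped across both mirrors)
zpool add tank mirror /dev/sdc /dev/sdd

# Should now list two mirror vdevs side by side
zpool status tank
```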

    Regarding #3: It's not really a fix for your problem, but I run this script regularly in addition to ZED (mostly because I found the script before I learned about ZED, but since ZFS is all about keeping data safe, why not have redundancy in the error checkers? :) ) The script is from: https://calomel.org/zfs_health_check_script.html


    I have modified it to suit my needs and also to make it work properly (email), so you may want a different version. You should change the <OMV_NAME> in the emails to your server's name. (Note: I may have forgotten to mark all my changes with the double #.)
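    The core idea of the script is small enough to sketch here. This stripped-down version uses canned `zpool status` output so it can be followed without a live pool; the real script also checks capacity, read/write/checksum errors and scrub age, and mails via your MTA (<OMV_NAME> stays a placeholder):

```shell
#!/bin/sh
# Minimal sketch of the health-check idea: flag any vdev state that is not
# healthy. The sample text below stands in for a live `zpool status` call.
status='  pool: tank
 state: DEGRADED
errors: No known data errors'

if printf '%s\n' "$status" | grep -qE 'DEGRADED|FAULTED|OFFLINE|UNAVAIL|REMOVED'; then
  # The real script would mail the full status instead, e.g.:
  #   zpool status | mail -s "<OMV_NAME>: ZFS pool problem" root
  echo "ALERT"
fi
```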


    I'm not sure this is directly related to ZFS, but is it normal to see better sequential write speeds than read speeds on a pool of mirrored vdevs?


    It seems my server sometimes struggles with reads, as the speed fluctuates a lot even when I'm just copying large files (to an SSD on the client side). My pool consists of 2 mirror vdevs, each made up of 2 WD Red 3 TB drives. I have compression turned on and have adjusted the ARC to 12 GB max (and 4 GB min). In the attached figures, you can see the write speed when copying some VMs to the NAS: a more or less constant 445 MB/s. The data adds up to 40 GB (23 GB after compression, i.e. disk usage is 23 GB on the NAS). When reading back the very same files, I see a very fluctuating graph topping out at about 340 MB/s. Even though the data in this example is compressible, I see the same behavior for incompressible data. Looking at CPU usage, I tend to see a large portion waiting for IO, and my CPU load can be very high (especially if I try to run several copy tasks; I have seen load numbers in the 20-30s).


    I also noticed that disk utilization in iostat (%util) is close to 100% when writing, but only about 60% when reading, so it seems the disks are not being fully utilized on reads.


    My server has a Xeon E3-1240, 16 GB of RAM and a Mellanox 10 Gb network card (same on the client side), and runs Erasmus (OMV3). I use an LSI SAS 9210 for half the disks; the other half runs from the motherboard (to have controller redundancy as well; as a side note, can this cause the strange behavior?).
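    In case someone wants to dig into the same symptom: watching per-vdev numbers while reproducing the slow read should show whether one controller's disks lag behind the other's (pool name "tank" and the file path are just examples):

```shell
# Per-vdev bandwidth/IOPS every 5 seconds while the slow read is running;
# compare the disks on the LSI HBA against the ones on the motherboard ports
zpool iostat -v tank 5

# Sequential read straight from the pool, taking the network out of the picture
dd if=/tank/example/largefile of=/dev/null bs=1M
```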



    The best strategy is up to you to decide ;) Maybe you can start by looking at the 3-2-1 way of doing backups, which basically means having (at least) 3 copies of your data on 2 different devices, with 1 copy being offsite; see https://www.backblaze.com/blog/the-3-2-1-backup-strategy/


    I'm using rsnapshot without any problems, but I am not copying to the same disk. The first run takes a long time, as it needs to create a copy of all your data. Subsequent runs are much faster, as only changed files need to be copied. Even assuming a speed of 100 MB/s, which is a bit high for an internal copy on a single hard drive (i.e. not an SSD), you will be copying for about 1.5 hours given 500 GB of data. So it is not strange that rsnapshot took hours to complete.
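    The arithmetic behind that estimate, for anyone who wants to plug in their own numbers:

```shell
# Rough copy-time estimate: 500 GB at an assumed sustained 100 MB/s
size_mb=$((500 * 1000))            # 500 GB expressed in MB
speed_mb_s=100                     # assumed speed, MB/s
seconds=$((size_mb / speed_mb_s))
echo "${seconds} s = $((seconds / 60)) min"   # 5000 s = 83 min, roughly 1.5 h
```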


    I don't share my snapshot folder over Samba; if I need any files, I log into my server using SSH and copy the files I want. I also have local and offsite backups so I can restore my data in case of disk failure, fire, burglary or other inconveniences that may occur :)

    Thanks for the update, ryecoaaron and volker! :)


    But it seems the fix has made only one of my four ZFS filesystems visible in the Filesystems tab and the Performance Statistics tab. Any chance this is a bug in the fix, or have I done something stupid on my end? The disk-usage statistics did work before the update(s).


    Some screenshots showing the issue:



    I just made a clean OMV3 install and managed to get ZFS up and running again after installing the backports kernel via OMV-Extras. But the "Detail" window for my ZFS pools is light grey / low contrast (other windows in the web GUI show correct contrast, and so did this window when I had OMV2 installed).


    I couldn't find anyone else asking about this before; is this normal, or is there anything I can do?


    I stumbled across this thread looking for an answer to the same question: Should I not receive an email when the disk(s) reach the informal level? I have similar settings, and my smartd line looks similar (the one difference is at -s: mine is S/../../x/0y only; I have no L).


    I get email notifications in general, and I have received other SMART messages (i.e. SMART error messages). The temperature exceedance shows up in the SMART log, I'm just not getting any mail... I just tried setting the critical level below my current drive temperature and instantly received emails about the critical limit, so it is only the informal part that is not working for me.
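    If I read the smartd(8) man page correctly, this may actually be by design: in the `-W DIFF,INFO,CRIT` temperature directive, reaching the INFO threshold only produces a syslog entry, while the warning/mail machinery (via `-m`) is tied to the CRIT threshold, which would match what I'm seeing. For reference, a typical smartd.conf line (values are examples):

```shell
# /etc/smartd.conf (example values):
# -W 4,40,45 -> report temperature changes of >= 4 degrees, log an informal
# entry at 40 C, and raise a warning (the part that triggers mail via -m) at 45 C
/dev/sda -a -W 4,40,45 -m root -s S/../../x/0y
```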

    Hi,


    It seems I manage to give the autoshutdown script a headache when I define an uptime range of 9..2 (i.e. 9 AM to 2 AM). I think this "bug" will show up for other combinations as well, whenever the end hour is smaller than the start hour. With 9-23 the sleep function works perfectly when we're inside the uptime period, but with 9-2 I get the following repeated in my log so many times that it's difficult to find the interesting stuff (if any ;) ):




    Basically, it spams my log with meaningless info :) However, it still shuts down my NAS when in the shutdown range, so it's really not that big of a problem; the script still works. But isn't it possible to avoid this somehow? It seems the cause is the negative TIMETOSLEEP?



    Code
    autoshutdown[32297]: DEBUG: 'FAKE-Mode: _check_clock(): CLOCKCHECK: 20; CLOCKSTART: 9 ; CLOCKEND: 2 -> forced to stay up'
    autoshutdown[32297]: DEBUG: 'FAKE-Mode: _check_clock(): TIMETOSLEEP: -19'
    autoshutdown[32297]: DEBUG: 'FAKE-Mode: _check_clock(): SECONDSTOSLEEP: 0'
    autoshutdown[32297]: DEBUG: 'FAKE-Mode: _check_clock(): MINUTESTOSLEEP: '
    autoshutdown[32297]: DEBUG: 'FAKE-Mode: _check_clock(): Final: SECONDSTOSLEEP: 0'
    autoshutdown[32297]: DEBUG: 'FAKE-Mode: _check_clock(): TIMEHOUR: - TIMEMINUTES: '
    autoshutdown[32297]: INFO: 'FAKE-Mode: System is in Stayup-Range. No need to do anything. Sleeping ...'
    autoshutdown[32297]: INFO: 'FAKE-Mode: Sleeping until : -> 0 seconds'
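    In case it is useful to the maintainers, this is how I imagine the wrap-around could be handled; a hypothetical sketch only, reusing the variable names from the log above (I have not checked the actual _check_clock() source):

```shell
#!/bin/sh
# Values as in the log: checking at 20:00, stay-up window 9..2
CLOCKCHECK=20; CLOCKSTART=9; CLOCKEND=2

# If the window crosses midnight (end hour < start hour), shift the end
# by 24 h before computing the sleep time, so it can never go negative
end=$CLOCKEND
if [ "$end" -lt "$CLOCKSTART" ]; then
  end=$((end + 24))                 # 2 -> 26
fi

TIMETOSLEEP=$((end - CLOCKCHECK))
echo "TIMETOSLEEP: $TIMETOSLEEP"    # 6 (hours), positive, unlike the -19 in the log
```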