Posts by raisOr

I have been running several NAS systems with fully encrypted root, home and swap partitions for some years now.

The RAID that contains the actual data of the NAS is also fully encrypted.


    I run this setup for reasons already mentioned in this thread:

    - physical theft of the machine

- I store the keyfile for the RAID on the root fs, so that the RAID can be unlocked automatically during boot via /etc/crypttab (see the example below)

-> so I do not want this keyfile to sit around on an unencrypted root fs (when the machine is off)
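

For illustration, a minimal sketch of what this looks like (UUID, mapper name and keyfile path are placeholders, not my actual values):

Code
# /etc/crypttab -- unlock the data RAID at boot with a keyfile stored on the encrypted root fs
# <target name>  <source device>                            <key file>          <options>
raid_crypt       UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /root/raid.keyfile  luks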


I created my own little step-by-step guide on how to do this.

This guide worked for years across different OMV versions, but unfortunately it now fails on OMV6 (a problem during update-initramfs leads to cryptsetup not being included in the boot image -> unbootable machine after the installation of OMV).
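

Two quick checks for anyone who wants to reproduce the problem (assuming a standard Debian-based OMV6 install; the initrd filename depends on the running kernel):

Code
# check whether cryptsetup actually made it into the initramfs image
lsinitramfs /boot/initrd.img-$(uname -r) | grep cryptsetup
# check that the cryptsetup initramfs hooks are installed at all
dpkg -l cryptsetup-initramfs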


If somebody is interested in the step-by-step guide, I could post it here.

Maybe somebody would then be able to help resolve the issue in OMV6...

    Hello everyone,


I have a question about the frequency of the "CRON-APT" eMail that OMV is sending out.


I have enabled the notification settings for "Software Updates", so that OMV will send me an eMail whenever there are updates available.
I now get these eMails twice every single night (one CRON-APT run seems to happen around 00:30 and one around 04:30).


Is there any way (besides probably messing around in the crontabs directly) to change the frequency of these eMails?
I would like to receive them once a week on a specific day at a specific time, so that I can plan the implementation of the updates upfront.
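

In case editing the schedule directly turns out to be the only way: on a standard Debian install cron-apt is normally triggered from /etc/cron.d/cron-apt, so I imagine a weekly run (here: Sundays at 06:00, just as an example) would look like this:

Code
# /etc/cron.d/cron-apt -- run once a week, Sundays at 06:00
0 6 * * 0   root    test -x /usr/sbin/cron-apt && /usr/sbin/cron-apt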


    Thank you!

    UPDATE:


With the help of @Sc0rp I was able to fix my issue:


As recommended by Sc0rp: run all the commands in the CLI, since the webif might not always run the correct commands with all the required switches. (A consolidated example with placeholder device names follows the steps below.)


    1. Mark the defective drive as faulty: mdadm /dev/md<x> -f /dev/sd<Y> or mdadm --manage /dev/md<x> --fail /dev/sd<Y>

    2. Remove the defective drive from the RAID: mdadm /dev/md<x> -r /dev/sd<Y> or mdadm --manage /dev/md<x> --remove /dev/sd<Y>

3. Shut down the server and remove the drive
    ...send the HDD to the seller (still had warranty on it) and wait for a few weeks :)
    The seller forwarded the drive to the manufacturer, and they confirmed it was indeed a hardware issue in the drive
    4. Install new replacement drive and boot the server
    5. Add the new drive to the RAID by issuing: mdadm /dev/md<x> -a /dev/sd<Z> or mdadm --manage /dev/md<x> --add /dev/sd<Z>
    6. cat /proc/mdstat should now show that mdadm is rebuilding the RAID
    ...wait for another 5-6h for the rebuild to complete...


7. Check the RAID status in the OMV webif; it changed back to clean
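

The same sequence consolidated, with hypothetical device names (assuming the array is /dev/md0 and the defective drive and its replacement both show up as /dev/sdd):

Code
mdadm --manage /dev/md0 --fail /dev/sdd     # 1. mark the drive as faulty
mdadm --manage /dev/md0 --remove /dev/sdd   # 2. remove it from the array
# ...shut down, swap the drive, boot (steps 3+4)...
mdadm --manage /dev/md0 --add /dev/sdd      # 5. add the replacement drive
cat /proc/mdstat                            # 6. watch the rebuild progress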



To be absolutely sure everything was fine with the RAID, I also ran


    /usr/share/mdadm/checkarray /dev/md<x>


    afterwards.



    Before running checkarray


    cat /sys/block/md<x>/md/sync_action


    would output idle


    which changes to check while the check is being done



    The check can also be paused using /usr/share/mdadm/checkarray -x /dev/md<x> and continued by using /usr/share/mdadm/checkarray -a /dev/md<x>
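

A convenient way to keep an eye on the check while it runs (the md device name is a placeholder):

Code
watch -n 60 'cat /proc/mdstat; cat /sys/block/md0/md/sync_action'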



    Once again thanks for the great help here in the OMV Forum and especially to "Mr. RAID" @Sc0rp ;)


    Update 20200605:

Added section about removing the drive and also added long versions of the commands

The machine is running 3.0.94 (Erasmus), so I guess this should be the correct version?


    Connecting the machine to the internet is not an option for us (for security reasons).


So I guess I'll just manually install the plugin (using the "Upload" button in the "Plugins" area of the OMV webif, I guess), and I'll check from time to time whether there is a new version of the plugin package and then repeat the same process.


Regarding the docker image pulling topic (the consolidated sketch after this list shows the whole round trip):
- I use a VM on my machine that runs Debian to pull the docker image (docker pull <image>)
- I then export that docker image into a file (docker save -o <outputfile.docker> <image>)
- The file then gets copied over SSH to the machine without the internet connection
- Import the docker image on the "airgapped machine" (docker load -i <outputfile.docker>)
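

A consolidated sketch with a hypothetical image name and hostname:

Code
docker pull nginx:latest                      # on the internet-connected VM
docker save -o nginx.docker nginx:latest      # export the image to a file
scp nginx.docker root@airgapped-host:/tmp/    # copy it over SSH
docker load -i /tmp/nginx.docker              # import on the airgapped machine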


    Works like a charm so far, I am already running some docker containers on the "airgapped machine".


    UPDATE:
    Got it working by doing it like I wrote above.
    The Docker plugin is working correctly, I can see all the docker containers which are running on the machine (can also start/stop them, display logs, etc.)
When you are in the Docker plugin webif, from time to time an error pop-up message "communication failure" appears, which I guess comes from the plugin when it tries to pull a list of docker images from the docker repo.
    Apart from that everything else seems to work.

    Hello to all,


I have already installed docker-ce via the CLI on an OMV machine that does not have any internet connectivity.
The machine is getting its packages/updates from a local apt-mirror that is running on another machine.


    Now I want to enable the docker gui plugin so I can check the state of the different docker images that I am running via the OMV webif.
    When I enable the "Docker CE repo" and save the changes, I get the following error message:


    Since the machine does not have any internet connection, it cannot pull the GPG keys for the new repo that it tries to enable.


Is there any way for me to
- download the required keys on another machine, upload them to the OMV machine and manually add them to the keyring via the CLI? (a sketch of what I have in mind follows below)
- enable the "Docker CE repo" on the OMV machine somehow? E.g. by manually creating a new apt sources.list file?


Thank you in advance!

Hello Schnuffer,


you can also take a look at the following OMV thread.


I had the same problem (RAID5 with 4x4TB extended to 6x4TB, which became >16TB).


Back then I was able to convert the existing ext4 to 64-bit (without data loss, but no guarantee!!!) and add the new HDDs to the RAID.

According to /etc/cron.d/mdadm, checkarray should get executed on the first Sunday of each month, and I have not noticed any issues with the data on the NAS so far.
I suspect the issue has been there for at least a few weeks, since I remember wondering about the "small size" of the RAID a few weeks ago when checking something in the webif.


I am currently running smartctl -t long /dev/sde to see if this sheds more light on the issue with the disk (I'll post the output once it's done).


    Some more information on my RAID setup (I don't know if this complicates things further): I built an encrypted RAID5 as documented in this OMV thread (basically mdadm -> LUKS -> ext4).


Could you please elaborate a little bit more on how I can
- remap the defective sector (see below for my rough understanding of this)
- take the steps required to a) take the RAID offline, b) re-add the disk that was taken out by mdadm, c) resync the RAID (I guess that should be the sequence)
since I am not an expert on those things ;)
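

From what I have read so far, "remapping" usually means forcing the drive to reallocate a pending sector by writing to it, roughly like in the sketch below (the LBA is a placeholder, and the write destroys that sector's data, so certainly nothing to run blindly):

Code
hdparm --read-sector 123456789 /dev/sde    # confirm the sector is unreadable
hdparm --yes-i-know-what-i-am-doing --write-sector 123456789 /dev/sde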


    Thank you in advance!

    Hello tkaiser,


    full output of smartctl -x /dev/sde is listed here.


    Running dmesg | grep sde shows a lot of messages like the one below


    Hello to all,


    today I noticed that my RAID5 (6x4TB Western Digital Red) had gone into status "clean, degraded".
    I then checked in mdadm and saw that one of the drives had been removed from the RAID by mdadm.


    I proceeded to run some short selftests using smartctl -t short /dev/sd<x> on all drives and they came back as


    Code
    smartctl 6.4 2014-10-07 r4002 [x86_64-linux-4.9.0-0.bpo.4-amd64] (local build)
    Copyright (C) 2002-14, Bruce Allen, Christian Franke, www.smartmontools.org
    
    
    === START OF READ SMART DATA SECTION ===
    SMART Self-test log structure revision number 1
    Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
    # 1  Short offline       Completed without error       00%      6770         -

    on all the drives except for the one that has been removed from the RAID5:

    Code
    === START OF READ SMART DATA SECTION ===
    SMART Self-test log structure revision number 1
    Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
    # 1  Short offline       Completed: read failure       70%      5158         9
    # 2  Short offline       Completed: read failure       70%      5158         9


I think the drive might be dead already or about to fail, so it does not make sense to try to "fix" the RAID using mdadm right now?
The drive only has 5158 hrs (214 days) on it, so I think it's a hardware failure of the drive?


    My plan of action would be:
    - get a replacement for the drive (I still got warranty on it)
    - shutdown the NAS and replace the faulty drive
    - recover the RAID5


Could you guys please give me your thoughts on the drive / the plan of action above?


    Thank you!

    Hello,


    I have been noticing the same problem.


    The issue started in Firefox Nightly a few weeks ago (currently installed version: 55.0a1).
    It does not have any add-ons except DownThemAll and AdBlock Plus.


    Then 1-2 weeks back the issue also started happening in the "normal" Firefox (installed version: 52.0.1).
    This install is also running the above plugins.


    The only version that is still working for me right now is Firefox ESR (installed version: 45.8.0).
    No plugins running in this one.


Since the issue moved from "newer" to "older" Firefox streams, I guess it must be a change in Firefox itself.

    Hello doron,


following your advice, I have built a virtual machine replicating my OMV setup.
I could then successfully test the cookbook and extend the RAID5 in the VM.


When growing the RAID on the actual server, everything behaved like in the VM, except that I ran into some problems with a limitation in ext4 (for details see post #3 in this thread, under section 6.).


    I could successfully fix that using the steps described in 6.1 and 6.2 and have now successfully extended my RAID5 with the 2 new drives.


    I have updated the post #3 so it now documents the steps that I have taken to successfully extend my RAID5.


Special thanks to @doron for not only supporting me with the initial encrypted setup a while back, but also for again giving me great input in this case.

    Hello doron,


    thanks for the input!


I was not aware that I also had to resize the LUKS device; after some research I came up with the following cookbook.


Unfortunately I currently do not have a test system on which I can try this; in the other OMV NAS that I have, all bays are already filled with disks :)


    Update 20170312: I was able to successfully grow my RAID5 using the following cookbook


1. Unmount the filesystem of the RAID5
umount /media/<xyz>


    2. close the crypto device
    cryptsetup luksClose <crypt device> (taken from /dev/mapper/<crypt device>)


    3. Add new drives to RAID5 (before: 4x4TB, after:6x4TB, new drives: /dev/sdbX and /dev/sdbY)
    mdadm --add /dev/mdX /dev/sdbX /dev/sdbY
    mdadm --grow --raid-devices=6 --backup-file=/root/grow_mdX.bak /dev/mdX


    Reshape of the RAID5 with the 2 new drives took around 26hrs to complete. Progress can be checked using cat /proc/mdstat


    4. open the crypto device
    cryptsetup luksOpen -d /path/to/keyfile /dev/mdX <crypt device>


    5. resize the crypto device
    cryptsetup resize <crypt device>


    6. check and resize the ext4
    e2fsck -fn /dev/mapper/<crypt device>
    resize2fs -p /dev/mapper/<crypt device>


Note:
The size of the grown RAID5 would be ~19TB, therefore resize2fs gave the following error on my machine:
resize2fs: Size of device /dev/mapper/<crypt device> too big to be expressed in 32 bits using a blocksize of 4096.


It seems like new ext4 filesystems automatically get created with the -O 64bit option, allowing them a maximum size of 1024 PiB.
If the ext4 has been created as "32bit" (which was the case on my OMV machine), then the maximum size of the filesystem is limited to 16TiB.
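

A quick way to check whether an existing ext4 already has the 64bit feature enabled (no output means it is a 32-bit filesystem, capped at 16TiB):

Code
tune2fs -l /dev/mapper/<crypt device> | grep -o 64bit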


This topic has been addressed in e2fsprogs Version 1.43; unfortunately it seems this version has not made it into the jessie repository yet.
Running dpkg --list | grep e2fsprogs reports e2fsprogs 1.42.12-2+b1 on my OMV machine.


In order to address this, you have to build e2fsprogs Version 1.43 from source and then resize the filesystem by running the following commands with the new e2fsprogs version:


    6.1 download and build e2fsprogs Version 1.43


    apt-get install git
    cd /usr/local/src
    git clone git://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git
    cd e2fsprogs
    ./configure
    make


    6.2 resize the filesystem using e2fsprogs Version 1.43


    cd /usr/local/src/e2fsprogs/
    ./resize/resize2fs
    Check whether resize2fs reports version "1.43.5-WIP" (or similar newer version)


    ./resize/resize2fs -b /dev/mapper/<crypt device>
    This will convert the "32bit ext4" to a "64bit ext4". It took approx. 10mins to complete.


    ./resize/resize2fs -p /dev/mapper/<crypt device>
    This will resize the ext4 to the size of the RAID5. It also took around 10min to complete.


./e2fsck/e2fsck -fn /dev/mapper/<crypt device>
    Check the filesystem for any errors.


    7. mount the ext4 filesystem
    mount /dev/mapper/<crypt device> /media/<xyz> 


    Note:
It seems like systemd was unlocking and automounting my RAID5 from time to time during the procedure. I did not investigate this further, but after each step I checked that the filesystem was unmounted and the LUKS device was closed. When it had been unlocked/mounted again, I would just re-run the commands from steps 1 and 2.
Luckily it did not break anything when I grew my RAID, but maybe somebody has some input on how to prevent systemd (or whatever service was causing this behaviour) from unlocking/mounting the filesystem.
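

One idea I have not tested: masking the corresponding systemd mount unit for the duration of the procedure might keep systemd away from the filesystem (the unit name below is a placeholder derived from the mount point):

Code
systemctl list-units --type=mount | grep media   # find the actual unit name
systemctl mask media-<xyz>.mount                 # prevent automatic mounting
# ...perform the resize procedure, then:
systemctl unmask media-<xyz>.mount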


    Regards,
    raisOr

    Hello to all,


I have managed to set up several fully encrypted OMV installations using this OMV wiki article.


    The wiki contains some caveats that gave me some headaches during the installations.
    I would like to update the article to help other users to avoid these obstacles.


How can I get the required access, and what is the process for updating wiki articles?


    Thank you!

    Hello OMV-Forum,


    I have setup a fully encrypted OMV installation using this OMV wiki as well as this OMV thread.


    Used version is OMV 3.0.59 (Erasmus).


Basically the RAID5 setup is (as I understand it; a rough command sketch follows below):

- ext4 on /dev/mapper/abc
- LUKS on /dev/mdxxx creates /dev/mapper/abc
- RAID (mdadm) creates /dev/mdxxx
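

For reference, the commands that create such a stack look roughly like this (device names, RAID level and parameters are placeholders, not the exact commands from the wiki I followed):

Code
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]   # RAID5 from 4 disks
cryptsetup luksFormat /dev/md0                                    # encrypt the whole array
cryptsetup luksOpen /dev/md0 abc                                  # creates /dev/mapper/abc
mkfs.ext4 /dev/mapper/abc                                         # filesystem on top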




Currently the RAID5 consists of 4x 4TB HDDs (/, /boot, swap, etc. are stored on a separate SLC USB stick).
I want to extend the RAID5 with another 2x 4TB HDDs, so it will be a total of 6x 4TB in the end.


    Which would be the steps to be taken to add the two new drives to the RAID ?


I am no expert on this, but I guess I would need to
- add the HDDs to the RAID (mdadm)
- make no changes to the LUKS, since it operates on /dev/mdxxx ?
- extend the ext4 to the full size of the RAID


Could somebody tell me which steps to take (either in the webif or in the CLI)?
Also, should I add both HDDs at once, or add one and then, once it's finished, add the second one the same way?


    Thank you very much in advance!


    raisOr

It is possible; I do it. But it calls for an all-manual setup.
Basically, what I did is create a RAID array, then encrypt the entire md (i.e. the RAID block device) using LUKS, rather similar to what's described in the wiki page you quoted (http://wiki.openmediavault.org/index.php?title=Encrypted_OMV_installation_with_aes-xts-plain64_cipher,_random_key_for_swap_and_exposing_the_rest_of_boot_disk_to_store_data), and then create…


@doron
I managed to set up an encrypted installation (root and swap encrypted) using the wiki article.
I would be very much interested in setting up an encrypted RAID6 now; could you please post the steps (commands) to be taken here?


    Thank you in advance!