Posts by edogd

    I was updating my crypttab to include a new encrypted disk that I added (/dev/sdd). I already have three other encrypted disks: /dev/sda, /dev/sdb, and /dev/sdc.

    I ran [tt]update-initramfs -u -k all[/tt] and rebooted my machine.

    Normally upon rebooting I get a prompt for the password to decrypt /dev/sda through /dev/sdc.

    This time the prompt came up for /dev/sda but an error also popped up saying /dev/sda1 is not a valid LUKS device.

    No matter how many times I typed in my password it would not accept it (I typed it really carefully and rebooted many times). After 3 attempts the system told me I had tried too many times and continued to boot.

    Another password prompt came up for /dev/sda, worded slightly differently and with a different text font; I believe the system had passed the initramfs phase by then. This time when I typed my password it worked, and the machine continued to boot.


    Something is going wrong with my initramfs boot, because even though the system mounted the encrypted drive, some things were off. The /dev/sda drive was not showing up in the OMV encrypted disks panel, even though I could see all my data at the Linux prompt.


    When I ran [tt]lsblk[/tt] it showed that /dev/sda had 2 partitions, /dev/sda1 and /dev/sda2. This is not correct: the drive is set up as one entire Linux LUKS partition. The other drives, /dev/sdb through /dev/sdd, which are also LUKS drives, don't show any partitions. I have checked in GParted and fdisk to confirm that there is only one partition on /dev/sda.


    I have run [tt]update-initramfs[/tt] multiple times and even removed the new drive that I added. But /dev/sda still boots with the error and still shows /dev/sda1 and /dev/sda2 partitions.


    I have run testdisk on the drive to check the partition table, and it says it's OK.


    When I boot from a SystemRescueCD image, [tt]lsblk[/tt] shows /dev/sda as a single device with no partitions.
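For anyone checking the same symptom: since the initramfs sees partitions the rescue system does not, one thing worth ruling out is a stale partition-table signature left on the disk from before it was LUKS-formatted. The sketch below runs on a throwaway image file, not a real device, to show how little it takes for the kernel to announce phantom partitions:

```shell
# Illustration on a throwaway image file (NOT a real device): a block
# device whose bytes 510-511 read 0x55 0xAA looks to the kernel like it
# carries an MBR partition table, which can produce phantom /dev/sdaN nodes.
img=$(mktemp)
truncate -s 1M "$img"                              # blank 1 MiB "disk"
printf '\x55\xaa' | dd of="$img" bs=1 seek=510 conv=notrunc 2>/dev/null

# Read the two signature bytes back, the way a signature scanner would:
sig=$(dd if="$img" bs=1 skip=510 count=2 2>/dev/null | od -An -tx1 | tr -d ' \n')
[ "$sig" = "55aa" ] && echo "stale MBR signature present"
rm -f "$img"
```

On the real drive, [tt]wipefs -n /dev/sda[/tt] (no-act mode, read-only) lists any such leftover signatures without touching them. Whether removing one is safe depends on the disk layout, so back up the LUKS header first with [tt]cryptsetup luksHeaderBackup[/tt] before changing anything.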


    This problem has also broken my remote ssh unlocking of the drives via dropbear.


    Is something wrong with the Linux image/kernel used while making the initramfs? What else can I check?


    Any ideas would be greatly appreciated.

    Yes, it must be a special read-only mount that is added for the .snapshot directory to appear in my school and work accounts. Nevertheless, the underlying functionality is present in OMV's rsnapshot.


    Thanks for the help!

    Thanks Macom. That helps a lot.


    Regarding the .snapshot folder: I guess my question was not clear; I had a typo in it. I have set up rsnapshot with a source and destination directory, and I can see rsnapshot made a snapshot of my data in the destination directory.


    I was just referring to other Linux systems I work on, like at school or at work, where we have a .snapshot folder present inside our source directories. If we need to fetch something from a snapshot, we go into that .snapshot folder and pull it (as noted in rsnapshot-HOWTO.en.pdf at https://github.com/rsnapshot/rsnapshot ). Is there a way to enable this feature in the rsnapshot in OMV?

    I am a little confused about how rsnapshot works. I have some basic questions I hope someone can help me with.


    * Does the rsnapshot used by OMV only use hard links?

    * Do the rsnapshot snapshots take up additional disk space, at least for the initial snapshot? I see that my rsnapshot share is just as big as the sum of all the shares I am using rsnapshot on.

    * On other systems there is normally a .snapshots directory inside the directories that rsnapshot is backing up. Where is that in the OMV setup?
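To make the hard-link question concrete, here is a minimal sketch of the rotation trick rsnapshot uses (throwaway paths, not OMV's real snapshot layout). The first snapshot is a full copy, which would explain the share being as big as the sources; later rotations hard-link unchanged files, so they cost almost no extra space:

```shell
# Sketch of rsnapshot-style rotation: unchanged files are hard-linked
# between snapshots, so successive snapshots share one copy on disk.
src=$(mktemp -d); snaps=$(mktemp -d)
echo "big file contents" > "$src/data.txt"

cp -a  "$src" "$snaps/daily.0"             # first snapshot: a real copy
cp -al "$snaps/daily.0" "$snaps/daily.1"   # second snapshot: hard links only

# Both snapshots list the file, but it has link count 2: one inode on disk.
stat -c '%h' "$snaps/daily.1/data.txt"     # prints 2
rm -rf "$src" "$snaps"
```

Tools like [tt]du[/tt] count each hard-linked file only once, which is why comparing per-snapshot sizes naively can be misleading.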

    UnionFS, or rather mergerfs, disables file caching so as not to double-cache the file system (see the GitHub page for mergerfs). This impacts things like databases and programs that need mmap functionality, so it's recommended not to place Docker databases on a mergerfs path. Use a direct path to the branch drives instead, or place the Docker containers on a non-mergerfs disk.

    Thank you. I set the hourly number to zero, and it seems to be working; there are no more hourly snapshots. However, I see that the script is still running every hour and waking up the disks. The system logs confirm this, and I hear the hard drive spin up each hour even though nothing is accessing the machine. Is there a way to stop the script from running hourly?
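A guess at where the hourly run comes from: on Debian-based systems rsnapshot is usually driven by a cron entry (often in /etc/cron.d/rsnapshot), and setting the hourly retain count to zero does not remove that entry, so cron still invokes the script every hour. Commenting the hourly line out, sketched here on a throwaway copy rather than the real file, stops the wake-ups:

```shell
# Hypothetical sketch: disable the hourly cron entry by commenting it out.
# This edits a throwaway copy; the real file would be e.g. /etc/cron.d/rsnapshot.
cronfile=$(mktemp)
cat > "$cronfile" <<'EOF'
0 */4 * * *   root  /usr/bin/rsnapshot hourly
30 3  * * *   root  /usr/bin/rsnapshot daily
EOF

sed -i '/rsnapshot hourly/s/^/#/' "$cronfile"   # prefix the hourly job with '#'
grep hourly "$cronfile"                          # the line is now commented out
rm -f "$cronfile"
```

If the job was added through the OMV web GUI instead, it should show up under Scheduled Jobs and can be disabled there.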

    I am having problems receiving my snapraid diff script email summaries because they are too big.

    This is the error that I get in mail.log


    Oct 1 23:39:15 myserver postfix/master[1527]: daemon started -- version 3.4.14, configuration /etc/postfix

    Oct 2 00:00:42 myserver postfix/postdrop[6491]: warning: uid=0: File too large

    Oct 2 00:00:42 myserver postfix/sendmail[6490]: fatal: root(0): message file too big


    I was wondering: is there an alternative to sending emails to an email account on my ISP or Gmail?

    Can I redirect emails to come to my OMV machine directly, or somehow direct the email notifications to some sort of local folder repository on my OMV?
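For what it's worth, the fatal "message file too big" comes from postfix's message_size_limit, which defaults to 10240000 bytes (about 10 MB). One option is to raise or disable the limit with [tt]postconf -e message_size_limit=0[/tt] (0 means unlimited); another, sketched below with dummy data rather than a real snapraid report, is to trim the report before it is mailed:

```shell
# Workaround sketch: cap the report at 1 MiB before handing it to the
# mailer, so it stays under postfix's message_size_limit.
# The "report" here is dummy data, not real snapraid diff output.
report=$(mktemp)
yes "snapraid diff line" | head -n 100000 > "$report"   # oversized "report"

limit=$((1024 * 1024))                    # keep at most 1 MiB
head -c "$limit" "$report" > "${report}.trimmed"

wc -c < "${report}.trimmed"               # prints 1048576
rm -f "$report" "${report}.trimmed"
```

Trimming loses the tail of the summary, so raising message_size_limit is probably the cleaner fix if the mail provider accepts large messages.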

    I installed the disk stats plugin, but I am not sure where to observe the output. I went to Dashboard -> System Information -> Performance Statistics -> Disk I/O and usage, but those tabs are blank.

    I am running my OMV 5 setup from a SanDisk 64GB Ultra Fit USB 3.1 Flash Drive (SDCZ430-064G-G46). I have installed the flash memory plugin, but I did not run the "optional" steps listed in the plugin's tab.


    I have noticed that my USB stick gets very hot. I actually bought 2 of these sticks and tried the OS on both. Both run hot.


    * Has anyone observed something similar?

    * How can I confirm that the flash memory plugin is working?

    * How can I see how much writing to the USB stick is occurring, either on the OS partition or the swap partition?

    * Do I need to perform the "optional" steps? Why are they optional?

    * With this plugin, is swap still being written to the USB stick?
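On the write-volume question, one low-level way to watch it is /proc/diskstats: field 10 of each line is the number of sectors written to that device since boot, and multiplying by 512 gives bytes. The arithmetic is shown below on a sample line with made-up numbers so the field positions are visible:

```shell
# Field 10 of /proc/diskstats is sectors written; multiply by 512 for bytes.
# The line below uses made-up numbers but matches the real field layout.
line="   8  0 sda 120 0 9600 50 2048 0 1048576 900 0 400 950"
echo "$line" | awk '{ printf "%d bytes written\n", $10 * 512 }'   # prints 536870912 bytes written

# On the live system (replace sdX with the USB stick's device name):
#   awk '$3 == "sdX" { print $10 * 512, "bytes written" }' /proc/diskstats
```

Sampling that value twice a few minutes apart shows the write rate, which should drop noticeably if the flash memory plugin's tmpfs mounts are doing their job.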


    BTW I am running an ASRock J5005-ITX board with 16GB of memory.


    Thanks in advance.

    I need some clarification regarding what needs to be set up after installing SnapRAID and configuring the disks. In the TechoDAD videos he says to add a scheduled job for running snapraid sync. However, on the SnapRAID services page there is a diff script settings section and a diff script schedule button. I clicked the diff script schedule button and noticed afterwards that it added a scheduled job for /usr/sbin/omv-snapraid-diff that runs daily.


    Do I need both jobs running? Is the diff script job added by the schedule button enough to perform syncs, check for bit rot, and fix errors?

    Are there any other jobs I should schedule for snapraid?


    Thank you for your help.

    I have created a unionfs pool called fred-pool. However, the mount point at the CLI is /srv/{some long string of numbers}. I don't see anywhere in the web GUI to set this to a nicer mount point name like /srv/fred-pool.

    How can I do this?
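In the meantime, a workaround I am considering (the paths below are made-up stand-ins, not the real mount point): leave the pool mounted where OMV put it and add a friendlier symlink next to it:

```shell
# Workaround sketch with stand-in paths: $base plays the role of /srv,
# and the UUID-named directory stands in for the pool's real mount point.
base=$(mktemp -d)                 # stand-in for /srv
mkdir "$base/1a2b3c4d-uuid-dir"   # stand-in for the UUID-named mount point
ln -s "$base/1a2b3c4d-uuid-dir" "$base/fred-pool"

readlink "$base/fred-pool"        # resolves to the UUID-named path
rm -rf "$base"
```

The symlink does not change what OMV itself displays, but anything on the CLI can then use /srv/fred-pool instead of the UUID path.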


    Thanks.