Posts by auanasgheps

auanasgheps this change you accepted doesn't make any sense. It adds the config path argument, but the variable is never populated. With the diff script in the plugin, it is populated by a config file passed as an argument to the script call, because multiple SnapRAID arrays are supported in the v7 plugin (and v6.2.1); each array has its own config file.

    I'll add instructions on how to configure it. Additionally, if there's only one array we should be fine, I guess.
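For reference, snapraid selects its config file with the -c flag, so a per-array call could be wrapped like this (the config paths are just placeholders for illustration, not the plugin's actual file names):

```shell
# Hypothetical sketch: each SnapRAID array has its own config file,
# and the diff script receives that path as an argument.
snapraid_diff_cmd() {
    # Build (but don't run) the per-array diff command; "$1" is the config path.
    echo "snapraid -c $1 diff"
}

# Example: show the command that would run for one array's config.
snapraid_diff_cmd /etc/snapraid-array1.conf
```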

    niemer, run cat /proc/meminfo and see how much memory you have left.
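If the full meminfo output is too noisy, you can grep just the useful lines:

```shell
# MemAvailable is the kernel's estimate of memory available for new
# workloads without swapping; it is more meaningful than MemFree alone.
grep -E 'MemTotal|MemAvailable|SwapTotal|SwapFree' /proc/meminfo
```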

    No, just delete the SnapRAID files, parity and content. You can find them (or at least one of them) in the root of each SnapRAID drive.

    Then start a new sync, it will obviously take some time, but it's legit.
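A sketch of that cleanup; the paths below are assumptions, so check your snapraid.conf for the real 'parity' and 'content' entries before deleting anything:

```shell
# Assumed paths for illustration only; verify against your snapraid.conf.
PARITY=/srv/dev-disk-by-label-parity1/snapraid.parity
CONTENT=/srv/dev-disk-by-label-disk1/snapraid.content

# Dry run: print what would happen (drop 'echo' to actually do it).
echo rm -f "$PARITY" "$CONTENT"
# Then rebuild parity from scratch:
echo snapraid sync
```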

    You might want to consider my script linked in the signature, it makes SnapRAID management a lot easier.

    Thanks for confirming, Soma!

    Since I have only one slot, I'll do everything "offline" using a live distro: copying the image to an HDD and then restoring it to the destination NVMe.

    The patient must be sleeping during the procedure so I don't mess anything up :)

    My hope is I do not have to change ANYTHING after the restore!

    Hey, the new drive does not run OMV.

    I am not replacing the system drive! OMV is happily running on a thumbdrive.

    This NVMe drive is my apps/docker disk, but it is heavily referenced in OMV. That's why I need to preserve UUIDs/paths.

    I just wanted to say thanks for the great work. ALWAYS!

    In the past I've expressed my gratitude by donating to both votdev and ryecoaaron.

    One thing I'd love to see in a future release is the USB Backup plugin... with Borg. I'm using the current one, which makes a simple plain copy to an external drive, and that's okay. It would be great to leverage the robustness of Borg, which is already available as a plugin. Can we make the two of them talk?

    Or maybe a "generic" USB plugin that runs any shell command when a specific USB device is plugged in, and unmounts it when the command is done, so anyone can use their preferred backup solution.

    Hi there,

    I want to replace the NVMe SSD in my NAS with a bigger one. I only have one NVMe slot in my motherboard.

    This NVMe is NOT running the OMV system, but:

    - has docker installed

    - hosts the swap file of the system

    - has a couple of shared folders

    My idea is to image the drive with a live distro, install the new drive, copy the image to the new drive, and lastly extend the partition.
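The round-trip I have in mind is essentially a dd clone. Here is a sketch demonstrated on plain files so it's safe to run; on the real hardware you'd point dd at the block devices (e.g. /dev/nvme0n1, which is an assumption, so double-check with lsblk first):

```shell
# Demo of the image/restore round-trip on ordinary files.
# For real: dd if=/dev/nvme0n1 of=/mnt/hdd/nvme-backup.img bs=4M status=progress
tmp=$(mktemp -d)
head -c 65536 /dev/urandom > "$tmp/old-drive"                  # stand-in for the old NVMe
dd if="$tmp/old-drive" of="$tmp/backup.img" bs=4k 2>/dev/null  # image it
dd if="$tmp/backup.img" of="$tmp/new-drive" bs=4k 2>/dev/null  # restore to the "new" drive
cmp "$tmp/old-drive" "$tmp/new-drive" && echo "images match"
```

Since dd copies byte-for-byte, the partition table and filesystem UUIDs carry over unchanged; afterwards you'd grow the partition and filesystem to use the extra space (e.g. growpart plus resize2fs for ext4).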

    Would the new drive show up with the correct UUID and references, or would the OS go crazy?

    If not, are there any other methods?

    ryecoaaron I've checked my machine and I have no files under /etc/initramfs-tools/conf.d/

    My real machine is a UEFI install, while the OMV VM I use for testing has such a file, but uses Legacy BIOS.

    Another cause could be that I disabled hibernation on the real machine by running systemctl mask ?

    I don't know where the resume file is read from, but it's missing.

    Posting here to provide an update.

    I ended up not disabling swap, but switching to swapfile on another drive.

    It's fairly easy to set up and doesn't require formatting or creating a partition.

    In this example, I created a 4GB swapfile. Use an SSD if you can; the file can live anywhere on the disk (I put mine in the root).

    These steps are for x86/x86-64 systems only.

    • Check the current swap configuration by running swapon -s
    • Execute the following commands to create the swapfile, assign the right permissions, and enable it as swap
    fallocate -l 4G /path/to/swapfile
    chmod 600 /path/to/swapfile
    mkswap /path/to/swapfile
    swapon /path/to/swapfile
    • Edit fstab with your favorite editor, like nano /etc/fstab
    • Comment out the existing swap partition
    • Add the new swap file like this:
    /path/to/swapfile none swap sw 0 0
    • Restart the OS
    • Check that the file has replaced the previous partition by executing swapon -s
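The file-creation part of the steps above can even be exercised without root (only swapon needs root); the path here is a throwaway placeholder, and the size is shrunk to 4M for the demo:

```shell
# Create and lock down a swapfile; mode 600 keeps other users from
# reading swapped-out memory.
SWAPFILE=$(mktemp -d)/swapfile      # placeholder path; use your real disk
fallocate -l 4M "$SWAPFILE"         # 4M for the demo; use 4G for real
chmod 600 "$SWAPFILE"
mkswap "$SWAPFILE"                  # writes the swap signature to the file
# 'swapon "$SWAPFILE"' would then enable it (root required).
```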

    You need more ram if your system is slowing down that much because it is swapping. Good thing you didn't just disable swap.

    I have 8GB of RAM and am using just under half. The system usually swaps between about 500MB and 2GB.

    Yes, I will upgrade to 16GB, but the swap was dragging performance down a LOT. I opened an issue on your GitHub repo to discuss this, because previously this plugin suggested that users disable swap.

    Hello, posting to provide some feedback: it was the damn swap partition.

    OMV on my system is installed on a thumb drive, mainly for ease of backup and not to steal a SATA port.

    Swap is enabled by default and located on the thumb drive, causing awful system performance!

    As soon as I switched it off and moved to a swapfile on the SSD, the system stopped experiencing high I/O wait and SMB is back to FULL proper speed.

    So yes, next time check the SWAP!

    Hi there,
    Back in the OMV5 days I recall that the flashmemory plugin advised disabling swap, and that's what I did back then.

    The OMV6 flashmemory plugin doesn't give any advice, but I realized I have a swap partition running off my USB stick, and I believe this is the cause of the slowdowns I'm facing.

    How do I permanently disable the swap partition?

    I already edited /etc/fstab to comment out the swap partition and executed swapoff /dev/sdXY with the correct partition. On the fly the swap partition is emptied and disabled, but after a reboot the swap is enabled again.

    There must be a Salt config or some other default that is turning it on again; can you assist?

    I'm surprised that nobody has reported this before, but what's the point of using the flashmemory plugin if the swap partition is kept on by default on the same USB flash drive we're trying to preserve?

    I could re-enable swap with a swapfile on my NVMe drive used for apps and docker, but let's do one step at a time.


    I am experiencing slow transfer speeds over SMB when handling small files, so I began troubleshooting.

    I excluded SMB, and started benchmarking locally on the server.

    For this test, I am transferring a 2.2GB folder containing 148 files: photos and videos from my smartphone. These aren't "so small" files; they vary from 2MB (photos) to 100MB (videos).

    I used rsync to show the current and total progress.

    I am using an NVMe drive to rule out any HDD weirdness and get the best-case scenario.

    Take a look at these runs. All identical: I just deleted the copied folder and ran them again.

    Only the 4th run took 2 seconds at the expected speed.

    During the slow copies, the speed would drop after about 30%, and iowait skyrocketed to 95% (measured via glances) with no load on the server.
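If glances isn't handy, the raw iowait counter can be read straight from /proc/stat, which works on any Linux box:

```shell
# On the 'cpu' line of /proc/stat, iowait is the 5th value after
# the 'cpu' label (awk field $6): user nice system idle iowait ...
awk '/^cpu /{print "iowait jiffies:", $6}' /proc/stat
```

These are cumulative jiffies since boot, so tools like glances and vmstat sample this counter twice and report the delta as a percentage.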

    Taken during the second copy

    The same behaviour can be replicated on HDDs with MergerFS.

    As you can see from top, there's no load on the server.

    It looks like writing is the only issue. If I copy the folder from the server to my client over SMB, I hit full gigabit speed.

    The hardware specs are in my signature below.

    What should I do?