Posts by auanasgheps

    For the record: the feature in my script is called "delayed scrub" but does exactly the same thing, because it is supposed to be run daily. I know it's lazy, but it works.


    I wanted to help get my script integrated into OMV, but I can barely find the time to maintain my own project. It's a busy year at work. Maybe towards the end of the year we could start working on this.


    I'll open an issue/discussion on GitHub to properly track it when I'm ready.

    With a USB stick you should disable the swap partition and, if possible, move it to a swap file on an SSD if you have one. That's what I've done.
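
    In case it helps, this is roughly what that looks like on a plain Debian/OMV install (the SSD mount point below is just an example, adjust it to your system):

    Bash
    # Turn off the current swap partition (also comment out / remove its line in
    # /etc/fstab so it stays off after a reboot).
    sudo swapoff -a

    # Create a swap file on the SSD instead. The path is an example only.
    sudo fallocate -l 4G /srv/ssd/swapfile
    sudo chmod 600 /srv/ssd/swapfile
    sudo mkswap /srv/ssd/swapfile
    sudo swapon /srv/ssd/swapfile

    # To make it permanent, add a line like this to /etc/fstab:
    # /srv/ssd/swapfile none swap sw 0 0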

    "Failed to suspend system. System resumed again: No space left on device" comes from, it refers to the missing swap partition or the space on the boot partition itself?

    It could be either a missing swap partition or a small swap partition.
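
    You can quickly check what you currently have with the usual tools:

    Bash
    # Show active swap devices/files (empty output = no swap at all)
    swapon --show
    # Show RAM and swap totals side by side
    free -h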

    I don't use hibernation, just standby (suspend to RAM).

    Hi all,


    From time to time I need to execute long jobs, and I need to disable the shutdown tasks that I configured in OMV via Power Management > Scheduled Tasks.

    Currently this step is manual and I want to automate it.


    I want to know if there's a proper way to interact with the OMV configuration.

    My goal is to know how to properly disable (and later re-enable) the jobs and save the configuration, all with shell commands.


    I've found the command to apply the changes here, but I don't know how to correctly make the changes themselves.
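
    For what it's worth, my rough understanding (I'm not sure this is the officially supported way, so treat the datamodel name as a guess) is that these tasks live in OMV's config database as cron jobs, which can be inspected and then redeployed from the shell:

    Bash
    # List the scheduled jobs OMV knows about; power tasks appear to be stored as
    # cron jobs of type shutdown/standby/reboot. The datamodel id is my assumption.
    omv-confdbadm read conf.system.cron.job

    # After changing a job's "enable" flag in the database (e.g. via omv-confdbadm
    # update or omv-rpc), redeploy the cron configuration so /etc/cron.d is rewritten:
    omv-salt deploy run cron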

    These commands do not return anything, except for the last one:

    Code
    root@nas:~# sudo grep '0 0' /etc/crontab /etc/cron.d/* /var/spool/cron/crontabs/*
    /etc/cron.d/openmediavault-powermngmt:40 0 * * 1,2,3,4,5 root systemctl poweroff >/dev/null 2>&1

    But that's the job that turns off my NAS at 00:40 every working day.

    Do you have any rsync jobs or scheduled jobs?

    The disk that is being woken up at midnight is my SnapRAID parity disk, which is ONLY used by SnapRAID, and the SnapRAID job starts during the day, not at midnight. Nothing except SnapRAID and OMV (which mounts the drive's partition) is aware of or uses this disk.

    Hi all,


    I noticed that every day at midnight my HDDs are being woken up, and this is being done by OMV itself.


    I'm 100% sure there's no Docker App or other service which is waking up drives.


    My current setup is:

    - USB Thumb drive where OMV6 is installed

    - SSD NVMe for Docker Apps

    - 1 HDD for Data, 1 HDD for Parity (SnapRAID)


    My Data HDD is always running; my Parity drive is in spindown because it is only used by SnapRAID once a day. It's the drive being woken up at 00:00 every day.


    I know OMV performs some cleanup activities at midnight, like logrotate or the SMB recycle bin cleanup, but these activities do not involve my Parity drive.


    Can somebody shed some light on this behaviour? In my opinion it needs to be addressed.
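
    For anyone hitting the same thing: one generic way to catch the culprit (not something from this thread, just a standard approach) is to log every file access on the parity drive's filesystem around midnight with fatrace. The mount path below is a placeholder:

    Bash
    # Install the tracing tool
    sudo apt install fatrace

    # Log all file accesses on the filesystem of the current directory (-c) with
    # timestamps (-t); replace the path with your actual parity mount point and
    # write the log to a different disk so the trace doesn't record itself.
    cd /srv/dev-disk-by-uuid-XXXX
    sudo fatrace -c -t -o /root/parity-access.log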

    If the scripts are that different, it should be possible for the plugin to configure things for either script. We could have a checkbox to select which script you want.

    Honestly, with all due respect to the creator(s) of the original script, I believe there's no reason to keep that one. Mine has all of its features and more.
    If the user does not care about advanced features, they can be ignored.

    Please somebody correct me if I'm wrong.

    Before this goes to GitHub (where it will disappear for most forum users), what are you proposing? As noted earlier, the plugin already allows the customization of variables and, once they are saved, a Diff script is generated. With your version of a generated Diff, where the plugin is concerned, what would be different?

    Ideally we would expose all of my script's features in the plugin GUI. It has many more features than the existing script.

    I'm not asking you to make the changes to the plugin. I just need to know how the plugin should interact with the script. I don't use snapraid or the script. So, it would take a lot of effort on my part to figure it out. Just not something I have time to do. But if you could tell me what the form would look like and what buttons you would want to perform actions, I could code that in short order. This is how the diff script stuff was added.


    And if you make changes to the script that the plugin needs to change for, I would need you to let me know. Or maybe once the script was implemented in the plugin, it would be easier for you to tell what needed to change.

    Alright, let's do this. I'll open a discussion/issue on GitHub so we can discuss it there.


    The plugin is fine. By automation I was referring to the diff script. I'm not sure there's any enhancement that would make me trust it. If there are changes to the array, I want to know what they are before syncing.

    I would recommend taking a look at borg as a proper backup solution. It is super, super reliable. It has versioning, encryption, compression, and deduplication.

    It takes care of my off-site backups to a NAS at my parents' house. Whatever happens to my local SnapRAID array, I have a weekly backup, along with three months of history/versioning.


    It runs on virtually anything: for this I'm using a 10-year-old Synology NAS and a shucked drive I paid very little for.


    The cost is small; you can automate it with borgmatic, and then you'll sleep well!
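
    To give an idea of what this looks like in practice (just a minimal illustration; the repository URL and paths are placeholders, and my real setup is driven by borgmatic):

    Bash
    # One-time: create an encrypted repository on the remote NAS
    borg init --encryption=repokey ssh://backup@remote-nas/volume1/backups/omv

    # Weekly: create a deduplicated, compressed archive of the data share
    borg create --stats --compression zstd \
        ssh://backup@remote-nas/volume1/backups/omv::'{hostname}-{now:%Y-%m-%d}' \
        /srv/dev-disk-by-uuid-XXXX/data

    # Keep roughly three months of weekly history
    borg prune --keep-weekly 13 ssh://backup@remote-nas/volume1/backups/omv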

    It's just that no one wants to help integrate that with the plugin.

    I am the maintainer of the snapraid-aio-script and I am humbled by the number of people who use and recommend it.


    I would love my script to be bundled in the plugin, because everyone would benefit from it, but I don't know how to accomplish this. I am not really a coder and learned shell for this specific project and for fun.


    The project has also evolved a lot from a "simple diff script". If we were to integrate it into OMV, we would have to expose every single option via the GUI: for example, which Docker containers to pause/stop, custom commands to run before/after, notification services, and so on.


    Also, don't forget new features. They don't come often, but I try.


    This is beyond my abilities.


    I already do my best to make this script as plug-and-play as possible for OMV users, since I'm a passionate OMV user as well: it auto-installs dependencies, reuses the mailx config from OMV, and has documentation for OMV users.


    I'm not sure there's any enhancement that would make me trust it. If there are changes to the array, I want to know what they are before syncing.

    Have you tried my script? It has a lot of logic to prevent unwanted syncs.
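
    Simplified a lot (the real script does much more than this), the core idea boils down to something like:

    Bash
    # Simplified illustration only, not the script's actual code: run a diff first
    # and refuse to sync when too many files were deleted.
    THRESHOLD=50
    DELETED=$(snapraid diff | grep -c '^remove')
    if [ "$DELETED" -gt "$THRESHOLD" ]; then
        echo "Too many deleted files ($DELETED), skipping sync until you review the diff."
    else
        snapraid sync
    fi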

    I'm seeing the same issue on a Windows 11 client.

    My NVMe SSD is EXT4, and I configured new HDD disks with BTRFS.


    Samba is crashing when trying to copy from SSD to HDD shares.

    OMV6. Oh dear.


    I'll try the workaround mentioned above.


    EDIT: The workaround works, but please note that this configuration will be overridden by OMV, either by an update or the next time you apply configuration changes. /etc/samba/smb.conf is one of the files that OMV controls, and we are not supposed to make manual changes to it.


    I know how to add custom SaltStack states, in fact I have a few, but I do not know how to properly override an OMV config. Waiting for the mods to comment.

    That's what my array is called as well :) First: Thank you for the script! It's really awesome.

    I've tried to use the dev branch, but it still won't work:

    Code
    ## Preprocessing
    SnapRAID is not running, proceeding.
    SnapRAID configuration file not found. The script cannot be run! Please check your settings, because the specified file /etc/snapraid.conf does not exist.

    From what I can see, the folder should be /etc/snapraid/... and not just /etc?

    Thank you for the kind words :)


    If you open the script config file, you will find this new section:

    Bash
    # SnapRAID configuration file location. The default path works on most
    # installations, including OMV6.
    # If you're using OMV7, you must manually specify your config file, which is
    # located in /etc/snapraid/
    # SNAPRAID_CONF="/etc/snapraid/snapraid.conf"
    SNAPRAID_CONF="/etc/snapraid.conf"


    We are working on a solution that will automatically pick the SnapRAID config file if there's only one, but it's not ready yet. In the meantime, you have to set this manually.
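
    The idea we're considering looks roughly like this (a sketch only, not the final code):

    Bash
    # Sketch: pick the config file automatically when exactly one candidate exists,
    # otherwise keep whatever SNAPRAID_CONF is already set to.
    CANDIDATES=$(ls /etc/snapraid.conf /etc/snapraid/*.conf 2>/dev/null)
    if [ "$(echo "$CANDIDATES" | wc -w)" -eq 1 ]; then
        SNAPRAID_CONF=$CANDIDATES
    fi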