Posts by jata1

    geaves - thanks for the input. I have never had this issue before, and I have multiple RPis and other SBCs running OMV for the last 4+ years, so it was a very strange issue that I was spending too much time trying to resolve.


    So I bought a new SD card (<$10) and set it up as a new clean install. Took about 2hrs in total. Now working perfectly and the issue is resolved.

    Pivo0815 - so it looks like I'm not the only one with this issue. Unfortunately, I do not have a solution (yet) as this issue has happened again for me.


    I took the option to buy a new SD card and have created a new/clean install of OMV using the latest Raspbian Lite image. Just finished the install and configuration, so I will monitor over the next few days and let you know if this works. If it doesn't, nothing will :)


    mi-hol - Thanks. I have already tried all of the suggestions in your link and they have not solved the issue. Hopefully the nuclear reset/install will fix it.

    Hi all,


    I have an RPi3 with OMV 5. Clean/new install a few months ago on an almost new SD card. Everything was working fine.


    A few weeks ago, I installed all the packages that needed updating through the GUI. The next day they were all back and listed as updates. I updated again and noticed the installed versions and the update versions were the same - so it was just updating over the current packages.


    Rebooted, did the update from the cli, and rebooted again afterwards, but the issue persists.
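

    (For reference, by "the update from the cli" I mean the usual Debian apt sequence, run as root / with sudo - roughly:)

        apt update       # refresh the package lists
        apt upgrade      # install the pending updates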


    Thought it might be a bad SD card, but it is a good quality card and I am not seeing any errors/issues with the unit.


    Any thoughts on how to resolve this or should I just take the hit and try a new SD card?

    Hi all,


    Happy New Year. Haven't been on the forum for ages as my OMV / SBC setup has been working so well.


    Sorry for a slight cross post with the thread I started below - but I have done more research and I think I was asking the wrong question(s). So I hope to get some more general advice through this thread...


    is ZFS a good choice for simple 2 disk setup


    My goal is to add totally new data / disks on my OMV5 rock64 with a good balance of redundancy vs. simplicity vs. data integrity (bitrot).


    I currently have 2TB of storage across 2 x 1TB drives configured as btrfs. It works OK, but I'm not that happy with it and the monthly scrub job is failing.


    I have 2x new 4TB drives and 4TB is enough storage, so I was planning to:


    1. use one new 4TB drive in ext4 format as the main data drive

    2. use rsync to do a daily backup of the main drive to the other 4TB drive - so a separate backup drive (rough rsync command below)

    3. rsync the backup drive to another NAS server weekly
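

    For step 2, the sort of rsync command I have in mind is roughly this (the mount points are just examples, not my real paths):

        # mirror the main data drive to the backup drive; --delete removes files
        # from the backup that were deleted on the main drive
        rsync -aH --delete /srv/data/ /srv/backup/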


    My main question with this is how do I handle data integrity / bitrot in this setup?


    I was thinking that if I make the backup drive btrfs then I can scrub / check integrity.
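

    i.e. something along these lines on the backup drive (mount point is just an example):

        btrfs scrub start /srv/backup      # start a checksum verification of the whole filesystem
        btrfs scrub status /srv/backup     # check progress and any errors found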


    Any help appreciated.

    Hi all,


    I have been using a single btrfs drive made up of 2 disks on my OMV 5 rock64 for a few years now. I need more storage and have decided to set up from scratch.


    Note that I also have an RPi3 with 2 disks set up as btrfs on OMV4 that I use as a separate backup for my main NAS - the rock64.


    I have read that ZFS might be a good option and I have had a few issues with btrfs.


    I have 2x new 4TB USB 3.0 drives coming and I also have powered USB 3.0 hubs for both NAS.


    My plan is:


    1. set up the 2x new drives on my rock64 and create a single ZFS drive/partition/pool (whatever) - rough commands below

    2. copy data from my existing btrfs drive (on main NAS or backup NAS)

    3. once all the data is restored, remove the btrfs drive and disks

    4. move the old btrfs disks to the RPi and set them up as ZFS (is this possible or advisable on OMV4?)
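

    For step 1, I am assuming the pool creation is something along these lines, either from the cli or via the ZFS plugin GUI (pool name and disk names are just examples):

        # create a mirrored pool from the two new 4TB drives
        zpool create tank mirror /dev/sda /dev/sdb

        # create a dataset for the shared data
        zfs create tank/data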


    Does anyone have any views/suggestions on this approach?


    Should I setup the RPI3 as OMV5 during this transition or is OMV4 fine as a simple backup server?


    Thanks in advance for any thoughts and assistance.


    Edit - sorry, I forgot to say that I am currently using 2TB of storage, so 4TB will be loads of capacity for me and it might be an option to mirror the 2 drives for redundancy.

    Do all of the commands work? i.e. have you tried each command in a terminal/command/shell window?


    I suggest the following:


    1. Check that each command works in a terminal window.


    2. Once you have all the commands working, you can create a shell script (filename.sh) and execute it with the command sh /path/filename.sh from the terminal (rough sketch below). Again, test that it is working.


    3. The final step is to create a scheduled job in the OMV GUI that runs the script - i.e. code to execute = sh /path/filename.sh
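

    As a rough sketch, the script in step 2 could look something like this (the commands and paths are just placeholders for whatever you are running):

        #!/bin/sh
        # /path/filename.sh - replace the placeholder commands with your own
        logger "filename.sh starting"    # optional: writes a line to syslog so you can see it ran
        /path/to/first-command
        /path/to/second-command
        logger "filename.sh finished"

    Then test it from the terminal with sh /path/filename.sh before putting it in a scheduled job.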

    A further update on my investigation is that the OMV cron.d jobs from the rsync GUI/page are working fine.


    It seems that the scheduled jobs in the GUI/page are not running - but they are enabled.
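

    The way I am checking whether they run is just to look for the cron entries in syslog after the scheduled time, roughly:

        grep CRON /var/log/syslog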


    Can anyone verify that scheduled jobs are working fine with the latest OMV 5 (5.5.8-1) and Linux kernel 5.7 on their system?

    Hi all - yesterday I updated to Linux kernel 5.7 on my rock64 with OMV5. Everything is fine, but I noticed that my cron job/script that checks and updates my Dynamic DNS config was not running - as part of the script, I have it set to log to the syslog when it runs. The job/script is scheduled through the OMV GUI as a scheduled job.
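

    (For context, the logging is just a logger call inside the script, roughly like this - the script name here is only an example:)

        logger "ddns-update.sh: checking/updating dynamic DNS"    # shows up in /var/log/syslog when the job actually runs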


    If I open/run the job manually through the OMV GUI it works fine. I tried changing the frequency and saving/applying but this hasn't helped.


    Any ideas or things I can look into to troubleshoot?

    I don't use email notifications, but you might need to have OMV notifications enabled and working in the GUI config before this will work, as OMV will need to send the email via a mail server.

    Understood. I have no idea what else could be triggering armbian-ramlog, but something is and, therefore, the issue will persist for armbian builds.


    I'm just a noob with enough knowledge to be dangerous but happy to help if I can. Sorry if I am not being helpful.

    I already had this cron file commented out, but it still ran armbian-ramlog.


    The only thing that I have found to stop it is to change the enabled flag in the default files below...


    changed the enabled flag to false in /etc/default/armbian-ramlog

    changed the enabled flag to false in /etc/default/armbian-zram-config
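

    i.e. on my install the relevant lines now look roughly like this:

        # /etc/default/armbian-ramlog
        ENABLED=false

        # /etc/default/armbian-zram-config
        ENABLED=false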