Posts by Adrian E

    Apologies - I'm only capable of following (very!) simple commands - what do I need to run in the CLI to delete the file?


    If I search with 'locate' I can find both weekly and monthly entries for the scrub command, and also a daily check for errors (amongst 97 entries that refer to btrfs).
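
    (A quick way to narrow those down - assuming the locate database is up to date - would be to filter the results to the cron directories, e.g.:)

    Code
    # hypothetical example: show only the cron entries among the btrfs matches
    locate btrfs | grep -E '^/etc/cron'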

    I was getting a daily Anacron e-mail with something along the lines of the following:


    Code
    /etc/cron.daily/openmediavault-check_btrfs_errors:
    Performing an error check on Btrfs file systems.
    /etc/cron.daily/openmediavault-check_ssl_cert_expiry:
    Perform a check for expired SSL certificates.
    /etc/cron.daily/openmediavault-pending_config_changes:
    Checking for pending configuration changes.

    This morning it's much shorter:


    Code
    /etc/cron.weekly/openmediavault-scrub_btrfs:
    Performing a scrub on Btrfs file systems.

    To note, this only appeared to start on 25 March and has been daily since. Nothing prior to that since Sept '22, when I had an SSL certificate expiry warning.

    Hi all


    When I look at my syslog file, I keep seeing the following message:


    Code
    Mar 27 08:28:47 openmediavault systemd[1]: Configuration file /etc/systemd/system/docker.service.d/waitAllMounts.conf is marked world-writable. Please remove world writability permission bits. Proceeding anyway.
    
    Mar 27 12:02:06 openmediavault systemd[1]: Configuration file /etc/systemd/system/docker.service.d/waitAllMounts.conf is marked world-writable. Please remove world writability permission bits. Proceeding anyway.

    Is this something I should address, and if so how?
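
    (I'm guessing the fix it's asking for is something along these lines - i.e. remove the world-writable bit and reload systemd - but happy to be corrected:)

    Code
    # remove the world-writable permission bit from the drop-in systemd complains about
    chmod o-w /etc/systemd/system/docker.service.d/waitAllMounts.conf
    # reload systemd so it re-reads the unit drop-ins
    systemctl daemon-reload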


    Thanks


    Adrian

    Thanks geaves - that's all it was. I ran the conversion command and it asked me to run a check routine first, which I did; that did some kind of inode optimisation, then the 64-bit conversion ran and took about 10 minutes. Logged out of systemrescue and into OMV, and it's allowed me to grow the file system now, so I have a lovely 42% full file system rather than 80+% full before, with 9.5TB of free space!
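
    (For anyone finding this later, the sequence was roughly as below, run from systemrescue with the array not mounted - the final grow I did from the OMV GUI afterwards:)

    Code
    e2fsck -f /dev/md0       # the check routine it asked for first (where the optimisation pass happened)
    resize2fs -b /dev/md0    # convert the ext4 file system to 64-bit (took about 10 minutes here)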

    Just to add that if I open GParted, the file system appears under a different device name, /dev/md127, so maybe it's just that name I need to use instead of md0?


    There's a red and orange triangle with an exclamation mark next to it, but nothing to suggest what it means.

    I'm a bit stuck with this now - I've downloaded and booted into systemrescue, got the terminal up and running, but when I try to run the command to convert to 64-bit I just get a message saying 'open: no such file or directory while opening /dev/md0'.


    It doesn't make any odds whether I try to unmount or mount the file system - if I try to unmount /dev/md0 it says no mount point specified, and if I try to mount it, it says it can't find it in /etc/fstab.
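
    (In case anyone else hits this, the checks I'd try first - on the assumption the array simply hasn't been assembled yet, or has come up under a different name such as md127:)

    Code
    cat /proc/mdstat           # see whether the array was auto-assembled and under what name
    mdadm --assemble --scan    # if nothing is listed, assemble any arrays found in the drive superblocks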


    All suggestions gratefully received!

    Did you set up the drive in terms of SMART monitoring, caching etc. before starting to move files onto the second drive?


    I've recently finished swapping out 4 drives and rebuilding the RAID array after each drive install. The one time I forgot to do the drive setup before starting the rebuild, the write speed to the drive was halved.

    OK, slightly panicked now lol


    Code
    root@openmediavault:~# sudo umount /dev/md0
    umount: /dev/md0: not mounted.
    root@openmediavault:~# sudo resize2fs -b /dev/md0
    resize2fs 1.46.2 (28-Feb-2021)
    resize2fs: Device or resource busy while trying to open /dev/md0
    Couldn't find valid filesystem superblock.
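
    (A couple of hypothetical checks to see what's still holding the device - my assumption is that the conversion can't be done while the running OMV system has the array in use, hence doing it from systemrescue instead:)

    Code
    findmnt /dev/md0    # show any mount that still references the device
    lsof /dev/md0       # show processes that have the block device open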

    OK, well progress has been made :) The RAID now shows 16.4TB and clean, so that's all good.


    Went to the file system and selected resize - I get the following error message:


    What am I doing wrong?!

    Superb! Thank you - ran mdadm --grow /dev/md0 --size=max and the output says:


    Code
    mdadm: component size of /dev/md0 has been set to 5860391424K


    In the GUI it now says it's 'active, resyncing' and it looks like it'll take 10 hours or so to finish (and there was me thinking I was done with all the hanging around lol). Looking forward to the 16.4TB capacity!


    I presume the file system expansion should work fine from within the GUI just by using the resize option?
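
    (My understanding - happy to be corrected - is that the GUI resize just grows the ext4 file system to fill the array once the resync has finished, i.e. something equivalent to the below:)

    Code
    cat /proc/mdstat      # check the resync has finished first
    resize2fs /dev/md0    # grow the ext4 file system to the new array size (works online)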

    Hi all


    So I've been busy the last few days swapping out each of my 4x 3TB drives for 4x 6TB drives. All done by manually failing one drive at a time in the RAID and then adding its replacement.


    I know I now need to grow the RAID before growing the file system, and it appears that needs to be done in the CLI and not in the GUI. My query is: what commands do I need to run in the CLI to do this (and do I need to run anything beforehand to make sure I do it right)?


    I've got the same number of drives before and after, so is it simply mdadm --grow /dev/md0 --raid-devices=4 or a variation of that?
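
    (For anyone searching later: since the number of drives stays the same, what did the job in the end - as per the post further up - was growing the component size to the new maximum:)

    Code
    mdadm --grow /dev/md0 --size=max    # use the full capacity of the new, larger drives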


    Thanks


    Adrian

    Adrian E are you looking at this one? I use zfs on my N54L. I'm trying to work out from the specs/data for that Gen10 whether this is doable, or whether the RAID controller is specific to software RAID only.

    Yes, that's the one I'm looking at. I'm still using ext4 on my N54L so I can't comment on the zfs query (it's beyond my knowledge, but I've seen quite a few threads about it, so I presume in OMV6 there are reasons for going that route over ext4 if starting from a new build?)


    Hopefully Spy Alelo can comment on the techie query :)

    Thanks for that Spy Alelo - looking at the Gen10+ it appears there are a few different versions available here in the UK. I presume for a NAS solution the 'lowest' spec one (Pentium G5420 processor) is more than sufficient, and it's not worth spending more on the Xeon processors? I note the basic one ships with 8GB of memory (which is all my N54L has anyway) and the more expensive ones ship with 16GB. Is it worth adding memory if OMV is all I'm running on it?


    I've noted those links to the card and NVMe SSD - definitely on my radar to go that route :)

    Hi Spy Alelo


    Can I pick your brains on options please? I have an N54L which I bought new in about 2013, and aside from adding RAM it's pretty much as it came out of the box (I did run OMV off a USB stick originally, but I'm now using a 120GB SSD with OMV6). It's mainly used as a media server, but also to back up a desktop and a couple of laptops. At times it's struggled with 4K content, but I suspect that's as much a network issue as a data handling one! At the moment it's working fine.


    I've hit 85% capacity with my 4x 3TB WD Reds in RAID5, so I've begun the process of migrating to new 6TB WD Red Plus drives (the 3rd one is rebuilding the RAID as I type), but what's got my attention now is that the hardware is clearly getting quite old, and although there have been no failures to date I do worry it might one day just go pop in a way that's not economically viable to repair. The server lives on a shelf high up in a cupboard so dust isn't much of an issue, and drive temps happily sit around 28-30C (82-86F), so I'm really just wondering at what point I'm likely to be living on borrowed time (I suspect I already am!)?


    I saw your suggestion of the Gen 10 Plus above, and that's defo an option (although the lack of an optical bay is a minus, as I was always tempted to fit one to the N54L for running Handbrake straight onto the storage - having not done it so far, it's defo not a deal breaker!). From a practicality point of view, if I swapped/cloned my OMV installation drive into a Gen 10 Plus, is OMV likely to 'just work' and be a simple hardware swap, with maybe a repair of the install to deal with hardware differences, or will I need to treat it as a completely new install?


    Are there other generations of the micro servers still in production that are worth looking at as options?


    Many thanks


    Adrian

    Just to close this off, further assistance off the back of another user's issue highlighted that my problems were all permissions-related - link below to where the fix was detailed :)