Posts by curto

    Guys,


The alerting system sends out emails based on CPU usage (and I am sure a few others as well!!)


I assume a non-native English speaker wrote the message.


The alert is:


    The system monitoring needs your attention.



    Host: \OMV-AFsydney.AFSYDNEY.LOCAL


    Date: Sat, 13 Mar 2021 08:35:24


    Service: \OMV-AFsydney.AFSYDNEY.LOCAL


    Event: Resource limit succeeded


    Description: loadavg (5min) check succeeded [current loadavg (5min) = 4.0]



    This triggered the monitoring system to: alert




    You have received this notification because you have enabled the system monitoring on this host.


    To change your notification preferences, please go to the 'System | Notification' or 'System | Monitoring' page in the web interface.




You will note the Event is:

    Event: Resource limit succeeded



This should actually be "Resource limit exceeded", not "succeeded".


    regards


    Craig

    Guys,


OK, coming from a CLI background and slowly coming to grips with OMV.


Trying to get a guest-only share set up


the same as this from the readme:


    Public only: guest user always used. This is the Guest Only option in the samba share configuration

    guest ok = yes

    guest only = yes


    Notes:


• The guest account is mapped to the system account nobody; it doesn't belong to the group users, thus it HAS BY DEFAULT NO WRITE ACCESS, just READ. This can be reverted by modifying the POSIX permissions of the share to 777.
    • These directives are NOT ACL
    • The semi public is valid for OMV version 1.10


Having run SMB for ages, I thought this would be easy.


I have gone to the folder in question in the filesystem and done a chmod 777 on all files and directories,
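

i.e. something along these lines (the path here is only an example):


# recursively open up the permissions on everything under the share
chmod -R 777 /path/to/shared/folder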


    I have then created a share





I have also set the debug level on Samba.
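

For reference, this can be done on a running smbd with the stock smbcontrol tool (the level value here is only an example):


smbcontrol smbd debug 3
# then watch the log while reproducing the problem
tail -f /var/log/samba/log.smbd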







    I have then created the SMB share






But I am still only getting read-only access to the share.


What's more, the debug setting does not seem to be putting any more info into the logs than before (as in, none at all).


I assume the logs are still in /var/log/samba?


Any ideas how I can get to the bottom of this?


There will be up to 50 different PCs accessing this at any point throughout the day, with different users on them at different times, so messing with the Windows credentials is not an option.


    regards


    Craig

Yep, you are going to have to isolate the issue before you go much further. If you have multiple RAM modules, remove as many as you can until you are left with a single one, then do the same tests and see if you get the reboot. I usually look at RAM as the culprit, but it could be many things, as listed by the mod above.


I would put it on a test bench somewhere with one of the many bootable Linux USB sticks for stress-testing components and see if you can narrow it down.
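

Something along these lines from a live stick will hammer the CPU and RAM (stress-ng and memtester are both in the standard Debian repos - the sizes and times here are only examples):


# load 4 CPU workers and 2 memory workers for 10 minutes
stress-ng --cpu 4 --vm 2 --vm-bytes 75% --timeout 600s
# walk 1GB of RAM 3 times looking for bad bits
memtester 1024M 3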


    Craig

    Hi guys,


    Running OMV5, 5.6.0-1 on an AMD Ryzen platform.


I have added the sharerootfs plugin and installed it - but nothing appears to happen (not sure what is meant to happen?)


I am trying to create a filesystem share of a mounted LVM drive set - I really do not want to push all the existing folders down another level just to have a "shared folder" at the mount-point level. I assumed this plugin would give me that ability, but I am not really sure at what point it is meant to pop its head up and give me options?


Any documentation on using this one anywhere?


    Craig

OK, yes, you can do it.


    1) Create a Standard partition and filesystem on the 8TB drive

    2) Copy all of your data across from the 4TB drive onto the 8TB drive

3) Use gdisk to delete the partitions on your 4TB drives and create new partitions (slightly shorter than the total drive); make sure you mark them as fd00 (the Linux RAID type)

4) From the CLI, create a RAID0 of the 2 x 4TB drives and wait until this has finished creating, then create a new RAID1 set using the new /dev/md0 device, with the keyword missing in place of the second member (see the sketch below this list)

5) Create a filesystem, mount it etc. and copy all your data across from the 8TB drive - note that at this point you have no protection for your data if either 4TB drive fails

6) Use gdisk again to delete the partition on the 8TB drive and create a new partition, again marked as fd00

7) From the CLI, add this drive to the RAID1 set using mdadm

8) Pray nothing goes wrong with the 4TB drives while you wait for this to sync
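

Roughly like this for steps 4 and 7 (the device names are assumptions - check yours with lsblk first):


# step 4: stripe the two 4TB drives, then build a degraded mirror on top of the stripe
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/md0 missing
# step 7: after the 8TB drive has been emptied and repartitioned
mdadm --manage /dev/md1 --add /dev/sdd1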



    Craig

Actually I was wrong - I have just done it on a CentOS 7 box and it worked fine.


    But it is going to be messy for a beginner and can only really be done if you have somewhere else to copy the data as you set this up.


I personally would recommend maybe looking at UnionFS or MergerFS etc. as a better solution, depending on what you plan on keeping on there.


    Craig

You cannot do this through the GUI; it is all CLI.


I have never tried it - will spin up a VM now and check - but 99% this will not work, as the RAID0 cannot be added.


    Will try it and advise back shortly


    Craig

Here you go - a screenshot of the RAID screen with it all up and running.


    6 x Drives in RAID5 = 4 x 6TB and 2 x 8TB


Manually (via the CLI) set up the array after manually partitioning the drives with gdisk and sgdisk.


Created a single RAID array with 6 x 6TB partitions (the first partition on each drive, so /dev/sda1, /dev/sdb1, etc.).


Then, after waiting for that to finish building and adding a filesystem, went into the CLI again and created a RAID1 with the remaining 2 x 2TB partitions on the 8TB drives.
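

In terms of commands it was roughly this (the device names here are assumptions - yours will differ):


# six 6TB partitions into the RAID5
mdadm --create /dev/md0 --level=5 --raid-devices=6 /dev/sd[abcdef]1
# then, once that finished building, the leftover 2TB partitions on the two 8TB drives
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sde2 /dev/sdf2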


    Install the proxmox kernel and remove the debian kernels from the omv-extras Kernel tab.

Have you tested that that works OK with these controllers? In the end I just added the fix line into my grub conf and regenerated the boot image, and it is working fine - across updates etc. as well.


    Eventually the kernel guys should catch up


    Craig

I do not think this is the same problem - mine would not see any of the drives on the controller. If you are having video issues, then do an easy test: when the boot menu comes up on the screen, press the E key.


Look for the line that ends in quiet and add nomodeset to the end of it - this resolved a problem I had with video and the new Ryzen Vega graphics chips - check under my name for a post about this.


Yours is however a much older machine, so it is doubtful it is a video issue - but this setting basically tells the kernel to leave the graphics alone as the BIOS set them up.


This setting is a temporary one for that boot only - you will need to edit your grub config to make it permanent - test it and see anyway.
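

To make it permanent, the standard Debian way is to edit /etc/default/grub and regenerate the config (the quiet value is just what is typically already there):


# in /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset"
# then rebuild the grub config
update-grub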


    craig

You did see my answers and some of the pitfalls?


If you are not knowledgeable (and comfortable) with CLI work on RAID and Linux disks in general, then what you are proposing is a bad idea in the short term - but I did give you an approach to try if you wish.


Remember, everyone on here (mods included) is a volunteer; they "probably" see the same questions over and over again and are just trying to save you from potential pain further down the path.


    Craig

Here you go - the RAID5 array creation completed after about 18 hours using the standard OMV way through the web GUI:


    1) Wipe each device

    2) Go into RAID arrays

3) Chose to create a new RAID5 array and selected the 6 drives (4 x 6TB and 2 x 8TB), knowing I would be losing the 2TB at the end of each 8TB drive


    This completed after about 18 hours and appeared fine - survived a reboot etc.


    Then blew it all away through the web interface


Went to the command line and made sure there were no leftover superblocks or RAID definitions in the mdadm config file,
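

i.e. something like this on each member (the device name is an example):


# wipe any old RAID metadata from the partition
mdadm --zero-superblock /dev/sda1
# then remove any stale ARRAY lines from /etc/mdadm/mdadm.conf and refresh the initramfs
update-initramfs -u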


Created the same-size partitions on each of the 6 drives using gdisk (I always leave about 100000 spare sectors at the end, as I was once burnt when I tried to mix and match vendor drives and found that 4TB does not mean the same thing to each vendor!)
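

sgdisk makes that repeatable across drives - something like the below (the spare margin and device names are examples only):


# partition 1 from the default start to 100MiB short of the end, typed as Linux RAID
sgdisk -n 1:0:-100M -t 1:fd00 /dev/sda
# clone the layout onto the next drive and give it fresh GUIDs
sgdisk -R=/dev/sdb /dev/sda
sgdisk -G /dev/sdb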


Created the array from the command line using mdadm --create etc. etc.


And this then shows up in the GUI - it will be interesting to see what it shows at the end, especially when I add the 2nd partitions on the 8TB drives and put them into a RAID1 array!


    As one single array, and you'll lose the space on the 8TB, but you know that anyway :)


    8| you must have a lot of painting to do :D:D I've gone back to using zfs

My understanding is that ZFS has no way to expand a drive pool/array?


In the past my procedure (prior to OMV) was as follows:


    1) RAID6

    2) Get low on space

3) Purchase a new economical drive at least as large as the largest in my array - usually a couple of TB bigger

4) Shut down the box and add the additional drive (after stress-testing it offline first)

5) Whilst the box was running, partition the new drive to the appropriate size for the smallest member in the array

6) Add it to the array and expand - WAIT A LONG TIME! (see the sketch after this list)

    7) Expand the LVM PV/VG/LV for the extra extents

    8) Expand the FS

9) Add additional partition(s) in the spare space on the new drive - at least a mirror with another partition, if not RAID10 or RAID6 depending on how many other drives and partitions are available - and repeat 6, 7 and 8 above. This lets me get maximum space out of new drives and rotate out older ones
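

Steps 6 to 8 boil down to something like this (the array, VG and LV names are assumptions, as is the ext4 filesystem):


# add the new partition and grow the array by one member
mdadm --manage /dev/md0 --add /dev/sdg1
mdadm --grow /dev/md0 --raid-devices=7
# once the (long!) reshape finishes, grow the LVM stack and then the filesystem
pvresize /dev/md0
lvextend -l +100%FREE /dev/vg0/media
resize2fs /dev/vg0/media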


I have followed this process on my media server since 1TB drives were in, and have slowly worked my way through 2TB, 3TB, 4TB, 6TB and now 8TB drives. It takes a bit of work, but once each of the steps is documented it is pretty much a no-brainer, and it gives me no more than one reboot of downtime to add the physical drive and a second reboot at some point to retire older drives via the reverse process.


These older drives then get moved across into my backup server, which is powered on once a day to back up all changed media files and any other critical machines (VMware cluster) and is then shut down again.


The one thing that has stopped me looking at FreeNAS in the past was that it only did ZFS and that there is still no way to expand a pool (plus the memory requirements are crazy). As we are talking predominantly media files here, I am not concerned about bit rot; anything that is critical is backed up to remote storage on an hourly/daily basis using rsync. I can see that ultimately I will end up moving to Btrfs, but it will be a while until I am happy its RAID6 implementation is right.


    Craig


The issue with RAID5 is a fairly straightforward one as the drives get bigger.


    1) Usually people will buy a couple (at least) of drives at the same time - so they have about the same amount of wear and tear on them.

2) Let's say one fails; you are diligent, diagnose the issue the same day, and get and install a replacement drive the same day.

3) There is a sequence of commands to go through to remove a failed drive and replace it with a new one (see the sketch after this list) - easy to stuff up and get wrong on the way through - but ignore that as a problem and assume you get it right

4) There is an incredible amount of stress on all of the remaining drives (some of which are either older than or the same age as the failed drive) while rebuilding an array and adding a replacement drive - every drive has to be read for every block, the parity calculated and then written back across the drives. If a single error is thrown during this process by any of the reads, calculations or writes, then another drive will be marked as offline and your whole RAID goes off the air - the more drives you have (and the larger they are), the greater the chance that this happens

5) The theoretical limit/percentage for this happening was passed for 8TB drives in about 2019, I believe, for RAID6 - RAID5, I believe, was never recommended for the same reasons

6) Although I do not use them, I would recommend reading up on SnapRAID and MergerFS/UnionFS as maybe smarter ways to go for a media environment
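

For point 3, the usual sequence is along these lines (array and device names are assumptions):


# mark the dying member as failed and pull it from the array
mdadm --manage /dev/md0 --fail /dev/sdc1
mdadm --manage /dev/md0 --remove /dev/sdc1
# physically swap the drive, partition it to match, then add the new one in
mdadm --manage /dev/md0 --add /dev/sdc1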


    Craig

Yeah, OMV uses the full block device; TBH I never tried partitions, but if you create the RAID on the CLI it should still display in OMV's RAID management - you are then committed to using the CLI for RAID management.

Yeah, it's how I have always done it on my home systems anyway - it gives me the flexibility to rehome older drives from online production to nearline backup and get a few more years of work out of them.


Will report back (in about 50 years when it finishes!!)



Yep, I know that - I previously tried it from the command line and OMV did not appear happy, as I use partitions rather than raw disks.


The 8TB drives are new to the system and I wanted to make sure that doing it the "OMV" way would get them going OK before I go back, blow it all away and do it with partitions so I can see how OMV handles it.


    Craig

Yep - I am right in the middle of doing a RAID with 4 x 6TB and 2 x 8TB drives right now!


Yep, rebuilding one from a failed member is not a fun time!


That's why I did not tell him how to do it - if he wants to go and research it he can, and hopefully he will learn enough along the way!


Craig :)

Yeah, I know - but it's no worse than having no RAID, as he intends to do, and it gives him a path to start with and get to RAID in the future.


Craig :P