Posts by Mr Smile

    Hi folks,


    I have set my SMART temperature monitoring to global 45°C max.

    Directly underneath it says: 'Report if the temperature is greater than or equal to N degrees Celsius.'

    (Per drive settings for Temperature monitoring are all left default at 'Use global settings'.)

    My mail notifications are also enabled, all boxes (including SMART) are ticked, and I get mails for updates etc. almost daily.


    During a longer period of stress my drives got pretty warm yesterday. When I checked their SMART temperatures manually in the web interface, all four were between 46°C and 47°C.
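
    (A quick way to cross-check those values outside the web interface, in case anyone wants to reproduce this; the device names are just examples:)

    Code
    # cross-check the SMART temperatures from an SSH shell (example device names)
    for dev in /dev/sd{a,b,c,d}; do
        echo "== $dev =="
        smartctl -A "$dev" | grep -i temperature
    done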

    But to my surprise I got no warnings in my inbox. Is this by design or is there something broken?
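
    My guess for where to look: the web UI setting presumably ends up as a -W directive in the smartd configuration, so checking what was actually written there might answer whether this is by design. (That's an assumption about where OMV puts it, not something I have verified.)

    Code
    # smartd's temperature warnings are driven by a "-W DIFF,INFO,CRIT" directive;
    # check what the generated config actually contains (stock Debian path assumed)
    grep -- '-W' /etc/smartd.conf
    # mails are only sent for devices that also carry a "-m <address>" directive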


    Thanks for clarification. :)


    After finding this glitch I had a look into the SMART Log in the logging section of OMV.

    There are 1222 pages of entries but all dates are between 01.01.2023 and 31.12.2023.

    votdev I have no idea whether temperature warnings should (not) come by mail, but log entries from the future are definitely a bug. ;)


    edit:

    Ok, I checked some other logs in the log section and found that the UPS logs go from 6.2.2023 to 12.12.2023.

    Other logs (System, Boot, ...) have plausible entries from the last few days or are empty.

    My server has no system time problems and dashboard shows the correct CET time and an uptime of two months.

    I think the log dates from the future must be a display issue.
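
    To back that up, this is what I would compare (system clock vs. the raw journal timestamps); just a sketch, nothing OMV-specific:

    Code
    # confirm the system clock and timezone
    timedatectl
    # show the newest journal entries with unambiguous ISO timestamps
    journalctl -n 20 -o short-iso --no-pager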


    If you need help with tracking this down, tell me what to do! :)

    Hello folks,


    the question is in the title. I searched the forum but didn't find a clear answer.


    I have four large Toshiba HDDs in my system. Because they have a feature called 'persistent write cache', which is advertised as protection against data loss in case of a sudden power failure, I had no concerns about enabling write cache for them.


    But apart from these large drives I have two Samsung (SATA) SSDs: a relatively small one containing the OMV system itself and another one where my VMs and containers live.

    I'm asking myself if write cache is even still beneficial in terms of speed with today's fast SSDs. And is it still risky to use? As I understand it, my SSDs don't have a feature like "persistent write cache", but maybe that isn't needed at all ...
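
    For what it's worth, this is how I would check (and temporarily toggle) the volatile write cache on the SATA SSDs from the shell, independent of the OMV checkbox; just a sketch with example device names:

    Code
    # show whether the drive's volatile write cache is currently enabled
    hdparm -W /dev/sdb
    # enable (1) or disable (0) it for a quick benchmark comparison
    hdparm -W 1 /dev/sdb
    hdparm -W 0 /dev/sdb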


    Can someone please tell me the exact pros and cons of enabling write cache for SSDs in OMV?


    Thanks for taking the time!

    This is wrong, in OMV6 this has been changed. See https://github.com/openmediava…ult/debian/changelog#L272

    Thanks for the link! I meant "consistency" in another way.


    Let me explain (and please excuse my clumsy English):


    On the "Storage / File Systems" overview page, I don't just see settings of devices I previously added but also current values of these devices (available space, a bar graph of space in use, current status, referenced or not, and maybe some other optional stuff).

    Settings that can be changed are under "Edit".


    "Storage / RAID Management" overview page also displays the current state "clean".


    On the "Network / Interfaces" overview page there are "Edit" and "show details" submenus for every device. At this time the overview shows info from the "Edit" submenu but I think having the contents of "show details" here would be much more valuable to the user. And the space is there.

    For example displaying "MTU = 0" on that overview page without explanation is misleading. When you click "Edit" you see that "0" is just the value for "using the defaults". On the overview page one would expect to see the IPs, Masks, Gateways, MTUs that are actually in use and the current Link status of that device rather than "DHCP - - - - 0" and so on...


    edit: I like the idea of only showing filesystems, network interfaces and so on that are actually configured by OMV. What about applying the same restriction to the network interfaces widgets? Both of them still list unconfigured and unused interfaces - the more detailed one even lists my loopback device (with MTU 65536). ;)

    That's intentional. The configuration pages, e.g. network, only show the configured data. The widgets are meant to display the current real data.

    Thanks for clarification, votdev . :)


    I have to say that I didn't expect this behavior to be intentional. In my opinion this configuration page should also show the received IP addresses when the method DHCP or Auto is set.

    Only having this in widgets section is not enough. Here is why:

    • Widgets are small. Even IPv4 addresses are cut off in them.
    • In my case I access my OMV GUI from different devices, and on most of them cookies are deleted on a regular basis. So I get an empty widget page whenever I log in.
    • The Network/Interfaces section now contains some exclusive information (method, WOL yes/no) but not everything. This is counterintuitive. MTU, for example, is shown as "0" here (because it's not set, I guess) but the widget says 1500. "Link" is only present in one of the widgets ... There should be one central place to show everything. And because of space (and the cookie hassle) I'd prefer the Network/Interfaces section.
    • Consistency: In the storage section you are able to make changes but also see the current state.
    • The larger of the two network widgets looks like a condensed mini version of Network/Interfaces but in fact it is not. This is confusing.

    Hi,

    I might have found a bug in my OMV6 installation.

    The network connection itself (IPv4 and IPv6) works without any problems, but when I go to Network/Interfaces I just see "- - - -":


    In contrast, two of the dashboard widgets show correct data (although a bit squeezed):


    My hardware is an HP MicroServer Gen8 with two Ethernet ports (both connected, but the second one is exclusively assigned to HP iLO).

    OMV version: 6.0.25-2 (Shaitan)

    Kernel: Linux 5.16.0-0.bpo.4-amd64


    Any idea what's wrong and how to fix it?

    To be fair, I've been running this configuration for months but didn't check the Network/Interfaces output until today because there were no problems.

    Installed some plugins in the meantime but don't know if this malfunction existed from the beginning.
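
    If it helps with debugging, I can of course post shell output; I'd probably start with something like this (the netplan path is just my guess at where the generated config lives):

    Code
    # what the kernel reports, independent of what the OMV database contains
    ip -br link
    ip -br addr
    # the network config OMV generated, if this version uses netplan
    ls -l /etc/netplan/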


    Thanks for your input! ;)

    (Embedded external content from www.youtube.com)


    Ok, I have to tell you step by step what I did, in my stupidity, to get to the root of this mystery.


    I used my test VM to install ALL plugins available and found another alphabetical inconsistency with USV being sorted in front of USB ...


    So I changed the language setting from German to English to check if USV (Unterbrechungsfreie Stromversorgung) turns into UPS (uninterruptible power supply). And what I got was this:


    Yes indeed, Rsync (English) is translated to rsync (German). :/ Makes total sense to me. :D


    Thank you, ryecoaaron , for even answering my stupid question.


    Conclusion: The alphabetical order (of the English plugin names) incl. sftp is absolutely fine. I was just confused by the German translations because I didn't expect a 'translation' from Rsync to rsync.

    This is caused by a package outside of the OMV project. Here you get hints on where to address your request. Maybe you can speed up the fix in upstream saltstack.


    Thanks for your support.

    Thank you so much for pointing me in the right direction. I don't think I'm able to help, but at least I understood where the problem comes from.


    Sorry to all for being so pushy, but the advice to ignore repeated error messages felt strange to me (a bit like being told to get used to clicking away warning messages in Windows on a regular basis :D ).

    Hello folks,


    is there some mechanism to order the menu entries of the Services menu? If so, it might be broken. I noticed that all entries in the list except sftp are in alphabetical order.



    "Locate" is also added by an omv-extras addon. So this exception has nothing to do with omv-extras addons being placed at the end of the list or so ...


    If the arrangement is supposed to be a product of chance, I apologize for my nitpicking. ^^:saint:

    Hi folks!


    In my hardware setup I have four drive bays populated with data hard disks and one SSD containing the OS.

    While installing OMV I remove the data disks to prevent any loss of data, so at this point the system SSD is /dev/sda.

    After initial setup I install and configure some addons needed for later operation - also sharerootfs (6.0-3)!

    This is how the Drives tab looks at this point:


    Sharerootfs makes the system partition (/dev/sda1 at this point) show up on the File Systems tab. Everything is fine.

    Then I attach the data drives and the mess begins.


    For some reason I cannot explain, the data disks are always put in front and the system disk is shifted to the end. In this case, with only one hard drive, the system disk becomes /dev/sdb.
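
    (The shift itself is easy to see from the shell; the device letter changes but the UUID stays put. Just for illustration:)

    Code
    # which block device currently backs the root filesystem
    findmnt -no SOURCE /
    # the stable identifiers that do not move when drives are added
    ls -l /dev/disk/by-uuid/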

    As a result I get this error message in the file systems tab:


    On the Dashboard the system file system is shown as "missing":



    and it also disappears from the file system list:



    Two years ago I already faced a similar problem in OMV5 (OMV remembers 'drive letters' during installation and creates ghost file systems afterwards), but I never found the reason or a solution.


    Back then I had no idea that sharerootfs might be responsible for those "missing" ghost entries in the OMV5 file systems tab, but I'm confident now that sharerootfs, in combination with my system shifting the "drive letters" when attaching more drives, is the problem.


    When I realized that, I uninstalled and reinstalled sharerootfs and everything went back to normal!


    So votdev : I have no idea what sharerootfs does (and WHEN it does it) but it is obviously overwhelmed when a newly attached drive pushes the system drive back in the sequence.


    Reproducing this behavior in VirtualBox is a bit hard but I managed to do so.

    First install OMV on a single virtual hard drive attached to SATA port 0 (default).

    Then in OMV install sharerootfs and shut down the vm.

    Add another virtual hard drive to the vm and reorder them so that the system disk is on SATA port 1 and the new one is on SATA port 0.

    When you try to start the vm, you now get an error message (no bootable medium found).

    So press F12 immediately after starting the vm and type "2" to select the right virtual hard disk for startup.

    After that you'll see the exact same problem as described above.
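
    For anyone who prefers the command line over the VirtualBox GUI, the reordering step could look roughly like this (VM and file names are made up, and I assume the SATA controller is called "SATA"):

    Code
    # create a second virtual disk (size in MB)
    VBoxManage createmedium disk --filename data-disk.vdi --size 8192
    # detach the system disk from port 0 ...
    VBoxManage storageattach "omv-test" --storagectl "SATA" --port 0 --device 0 --medium none
    # ... reattach it on port 1 and put the new data disk on port 0
    VBoxManage storageattach "omv-test" --storagectl "SATA" --port 1 --device 0 --type hdd --medium omv-system.vdi
    VBoxManage storageattach "omv-test" --storagectl "SATA" --port 0 --device 0 --type hdd --medium data-disk.vdi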


    I'm not a dev and I didn't look into the details of sharerootfs but I fear its magic has to be repeated every time the drive order changes to prevent this bug. :/


    edit: In my server setup the system SSD is set to first place in the BIOS boot order, but nevertheless OMV pushes it behind the data disks at system start. With all 4 data drives attached, the system SSD ends up as /dev/sde. And every time such a shift happens, these errors have to be repaired by uninstalling/reinstalling sharerootfs again.

    Wait a day or so and try the update again or you can try forcing it by updating by hand in the shell.
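
    By hand would be something like this (plain Debian tooling; the omv-upgrade wrapper only if your version ships it):

    Code
    # refresh the package lists and retry the held-back upgrade manually
    apt-get update
    apt-get dist-upgrade
    # or, if available, the OMV wrapper that does roughly the same
    omv-upgrade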

    Deleted my second question from this (resolved) thread just before you answered. :saint:  :S

    But anyway thanks for your answer. I'm not in a hurry.

    Just wanted to know why this is happening in OMV6. In OMV5 I don't remember any held-back updates.

    When I ran my first update on a recently installed OMV6.0-34 amd64, I noticed the following error scrolling by in the update window:



    I already saw another thread reporting similar errors, but not exactly the same. Unfortunately I neglected to look for the mentioned files immediately, and after the next reboot they were gone from /tmp/. But it should be easily reproducible on a test machine/VM.


    Is this something I should be worried about? :/

    Then OMV6 should only be released when Debian 11 is out?

    Maybe it will be there soon™

    I really hope for that!

    Please votdev, use this V6 step to finally align with the Debian release scheme! I like OMV very much, but this version chaos between OMV and its underlying base totally sucks. I know that for some reason you don't like somewhat fixed release cycles, but aligning with Debian totally makes sense. Please let me explain my idea:


    Aligning with Debian takes pressure off you and the users.


    I bet most OMV users would be fine with waiting for new features (and would stop asking for release dates) if it's clear that the OMV beta is based on Debian testing and won't be released until its base becomes stable.

    But when Debian+OMV are finally released, we have the prospect of an up-to-date base combined with an up-to-date front end for roughly two years. When it comes to people's data, long-term stability is more important to most of them than having the newest features available instantly.


    If you decide to release OMV6 based on Debian 10 (before Debian 11 becomes stable), users have to stay on Debian 10 oldstable again until OMV7 is done. Debian 11 is expected to become stable in mid-2021. In my impression such a roughly two-year release schedule would fit the needs of most of us. So please consider my arguments when you decide on a direction.


    Aside from that, thanks for this nice release. Runs well so far. :thumbup:

    Thanks. Unfortunately clearing the cache didn't help either. :(


    And regarding the option: the wipefs manpage also says that --noheadings is the same as -n.
    It's the first time I've consciously noticed such an ambiguity. So using -n gives me a 50% chance of getting either option? :huh::D


    But I found a real trace of what's going on here under System Information / Report:



    ================================================================================
    = Static information about the file systems
    ================================================================================
    # /etc/fstab: static file system information.
    #
    # Use 'blkid' to print the universally unique identifier for a
    # device; this may be used with UUID= as a more robust way to name devices
    # that works even if disks are added and removed. See fstab(5).
    #
    # <file system> <mount point> <type> <options> <dump> <pass>
    # / was on /dev/sdb1 during installation
    UUID=ea9995f6-9084-43b3-90d2-17bcb592ed50 / ext4 errors=remount-ro 0 1
    # swap was on /dev/sdb5 during installation
    UUID=0353bed2-8aa8-4524-b817-b9173f3b55a0 none swap sw 0 0
    /dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
    # >>> [openmediavault]
    /dev/disk/by-label/ssd /srv/dev-disk-by-label-ssd ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    /dev/disk/by-label/daten /srv/dev-disk-by-label-daten ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    # <<< [openmediavault]



    It says that the root file system was on /dev/sdb1 during installation, and this is correct! When I installed OMV5 I had only two drives connected to the system:
    my small data SSD (label 'ssd') as /dev/sda and the tiny tiny system SSD I installed OMV on (/dev/sdb).
    After setup I attached the RAID drives and they became /dev/sda - /dev/sdd. So the other drives were pushed to e and f.


    The reason for having the system drive on the least prominent letter is that it is an NVMe SSD living on an expansion card.


    The question is: How can I safely make OMV5 lose its memory? In the File Systems tab there is a Delete button, but I'm a bit concerned that deleting the phantom /dev/sdb1 could somehow break my real /dev/sdb.
    But when I boot up without the RAID again, the ghost /dev/sdb1 entry is not there. So what to do now?
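
    What I would probably check first, assuming the File Systems tab is fed from OMV's own config database rather than from the live system (that's my assumption, not verified):

    Code
    # look for the stale entry in OMV's configuration database
    grep -n 'sdb1' /etc/openmediavault/config.xml
    # or list the stored mount entries via the config tool
    omv-confdbadm read conf.system.filesystem.mountpoint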


    /edit: I detached my RAID drives because many people recommend this here. Does anything speak against reinstalling OMV with ALL 6 drives connected and being very careful while choosing the right drive for root?
    That way OMV would remember the right 'drive letter' for root from the start.


    @votdev Doesn't matter how I solve this problem for me. This behavior is definitely a bug, that should be fixed somehow (maybe also in OMV4)!

    Ok, you had a post I was reading and it just disappeared :)


    But to clarify, wipefs -n will do nothing other than display the partition and file system information on that drive.


    Here is what wipefs --no-act /dev/sdb said:

    Code
    root@myserver:~# wipefs --no-act /dev/sdb
    DEVICE OFFSET TYPE              UUID                                 LABEL
    sdb    0x1000 linux_raid_member c22bdeb3-7fdd-f2eb-641b-3e42597f1d04 myserver:daten


    The output for /dev/sda, /dev/sdc and /dev/sdd looks EXACTLY the same (except for the device name)!


    So what to do? I still wasn't able to find any signs of a /dev/sdb1 partition/filesystem in the terminal ... ?(
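
    For completeness, these are the other places I'd look (nothing OMV-specific, just the usual suspects):

    Code
    # every block device plus any filesystem signature the kernel sees
    lsblk -f
    # ask the partition table of the suspect drive directly
    fdisk -l /dev/sdb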

    @Mr Smile looks as if your last post was in a mod queue as it wasn't there when I replied.

    arrrg mod queue again. :cursing:
    Just wanted to add a correction ... 8|


    /edit: @'votdev' Automatically putting previously published posts into the mod queue because of some tiny edits is annoying for us users and also produces unnecessary work for the mods. Please think about disabling this automatism!

    @geaves Thanks for your reply!

    Nothing to worry about, it's assigned a new raid reference.

    I also don't think so, but I wanted to mention it because it's different from before.


    It's the same on my test machine.

    Good to know, though I still don't think displaying SWAP under File Systems makes sense, because a swap partition by definition doesn't contain an actual file system.


    That is odd and it must be a first :)
    There are two things you could do:


    1. SSH into OMV and run wipefs -n /dev/sdb. This will not wipe the drive but it will give you information on the file systems; it may be possible to remove what is causing it.


    2. Remove /dev/sdb from the array, wipe it as if it were a new drive and re-add it; all of that can be done from the GUI.


    First I have to say that one reason for not deleting the RAID and starting over with empty disks is that it is largely filled with 'medium' important data and I don't currently have the capacity for a full backup. (My really important stuff is of course properly backed up twice.) Losing those terabytes of data would still be a shame, but reacquiring them would probably cost about the same 200€ as a hard disk of that size. So let's say I'd like to minimize the risk of losing those 200 euros. ;)


    1. I had a look at the manpage of wipefs and the -n option seems to be ambiguous. 8|

    Quote

    -n, --noheadings Do not print a header line.

    Quote

    -n, --no-act Causes everything to be done except for the write() call.

    Just to be sure: You meant --no-act, right?


    Anyway, the risk of having any 'old' filesystem signatures on my second drive is near zero. I bought all four drives manufacturer-sealed (last year or so) and didn't do any experiments before building the RAID5 array in the OMV4 GUI.



    2. I'd keep degrading the array as the last option to minimize the risk of data loss.



    The interesting part is: I didn't manage to display the 'ghost partition' in the terminal. So where does the File Systems tab of OMV5 get its data from?
    @votdev Could you (or someone else who knows the code) please point this out? Thanks a lot!

    Maybe answering this list of questions will help you for a start and may already clear up a few things: Degraded or missing raid array questions

    @cabrio_leo Thanks for the link. I'll gather the info.


    1) cat /proc/mdstat:

    Code
    root@myserver:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    md127 : active raid5 sda[0] sdd[3] sdc[2] sdb[1]
          11720661504 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
          [>....................]  check =  0.0% (1056320/3906887168) finish=369.7min speed=176053K/sec
          bitmap: 0/30 pages [0KB], 65536KB chunk
    
    
    unused devices: <none>


    2) blkid:

    Code
    root@myserver:~# blkid
    /dev/sda: UUID="c22bdeb3-7fdd-f2eb-641b-3e42597f1d04" UUID_SUB="7e3040a4-847d-74a5-f88f-5143f8c1e1ee" LABEL="myserver:daten" TYPE="linux_raid_member"
    /dev/md127: LABEL="daten" UUID="93f6618c-6119-4c35-953b-67b0f175db1a" TYPE="ext4"
    /dev/sdb: UUID="c22bdeb3-7fdd-f2eb-641b-3e42597f1d04" UUID_SUB="ff046fc5-6b2d-56fb-7c80-d4a392c7fd24" LABEL="myserver:daten" TYPE="linux_raid_member"
    /dev/sdc: UUID="c22bdeb3-7fdd-f2eb-641b-3e42597f1d04" UUID_SUB="fd82acbc-3708-cd65-6cf9-86293f1838bb" LABEL="myserver:daten" TYPE="linux_raid_member"
    /dev/sdd: UUID="c22bdeb3-7fdd-f2eb-641b-3e42597f1d04" UUID_SUB="53b1fd9f-c3c6-ce77-294f-78ba5951a55e" LABEL="myserver:daten" TYPE="linux_raid_member"
    /dev/sde1: LABEL="ssd" UUID="923948de-d9a1-4262-b08a-b913fedd8e15" TYPE="ext4" PARTUUID="235f252d-fbb3-4102-9382-ea562d1c7e8b"
    /dev/sdf1: UUID="ea9995f6-9084-43b3-90d2-17bcb592ed50" TYPE="ext4" PARTUUID="1885578f-01"
    /dev/sdf5: UUID="0353bed2-8aa8-4524-b817-b9173f3b55a0" TYPE="swap" PARTUUID="1885578f-05"


    3) fdisk -l | grep "Disk "


    4) cat /etc/mdadm/mdadm.conf


    5) mdadm --detail --scan --verbose

    Code
    root@myserver:~# mdadm --detail --scan --verbose
    ARRAY /dev/md/myserver:daten level=raid5 num-devices=4 metadata=1.2 name=myserver:daten UUID=c22bdeb3:7fddf2eb:641b3e42:597f1d04
       devices=/dev/sda,/dev/sdb,/dev/sdc,/dev/sdd


    6) Array details page from OMV5 RAID Management tab says:


    This all looks good to me, but the 'ghost device' /dev/sdb1 is still present and marked as missing.
    Any ideas? :whistling:
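
    If per-device detail helps, I can also post the output of this (assuming md127 is the right array, as in the mdstat output above):

    Code
    mdadm --detail /dev/md127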


    @votdev I don't have a GitHub account, but this might be an OMV GUI bug. Could you please have a look at it? I'm willing to deliver more terminal output if needed.


    /edit: The only notable thing is that mdadm.conf seems to be a bit empty. But I have no idea if this is still the right place to look. (@ryecoaaron's posting was meant for OMV 1.0.) Did I perhaps miss some important step in my RAID migration?