Posts by Mr Smile

    Then OMV6 should only be released when Debian 11 is out?

    Maybe it will be there soon™

    I really hope for that!

    Please @votdev, use this V6 step to finally align with the Debian release scheme! I like OMV very much, but this version chaos between OMV and its underlying base totally sucks. I know that for some reason you don't like more or less fixed release cycles, but aligning with Debian totally makes sense. Please let me explain my idea:


    Aligning with Debian takes pressure off both you and the users.


    I bet most OMV users would be fine with waiting for new features (and would stop asking for release dates) if it's clear that the OMV beta is based on Debian testing and won't be released until its base becomes stable.

    But once Debian and OMV are finally released, we have the prospect of an up-to-date base combined with an up-to-date front end for roughly two years. When it comes to people's data, long-term stability is more important to most of them than having the newest features available instantly.


    If you decide to release OMV6 based on Debian 10 (before Debian 11 becomes stable), users will have to stay on Debian 10 oldstable again until OMV7 is done. Debian 11 is expected to become stable in mid-2021. In my impression, such a roughly two-year release schedule would fit the needs of most of us. So please consider my arguments when you have to decide on a direction.


    Aside from that, thanks for this nice release. Runs well so far. :thumbup:

    Thanks. Unfortunately, clearing the cache didn't help either. :(


    And regarding the option: the wipefs manpage also says that --noheadings is the same as -n.
    It's the first time I've deliberately noticed such an ambiguity. So using -n gives me a 50% chance of either option? :huh::D
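
    To be safe, I could sidestep the ambiguity by spelling out the long options directly (a small sketch, assuming both long options are supported by the installed util-linux version):

    Code
    # Unambiguous long options instead of -n:
    wipefs --noheadings /dev/sdb   # list signatures without the header line
    wipefs --no-act /dev/sdb       # dry run: report what would be wiped, write nothing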


    But I found a real trace of what's going on here under System Information / Report:



    ================================================================================
    = Static information about the file systems
    ================================================================================
    # /etc/fstab: static file system information.
    #
    # Use 'blkid' to print the universally unique identifier for a
    # device; this may be used with UUID= as a more robust way to name devices
    # that works even if disks are added and removed. See fstab(5).
    #
    # <file system> <mount point> <type> <options> <dump> <pass>
    # / was on /dev/sdb1 during installation
    UUID=ea9995f6-9084-43b3-90d2-17bcb592ed50 / ext4 errors=remount-ro 0 1
    # swap was on /dev/sdb5 during installation
    UUID=0353bed2-8aa8-4524-b817-b9173f3b55a0 none swap sw 0 0
    /dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
    # >>> [openmediavault]
    /dev/disk/by-label/ssd /srv/dev-disk-by-label-ssd ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    /dev/disk/by-label/daten /srv/dev-disk-by-label-daten ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    # <<< [openmediavault]



    It says that the root file system was on /dev/sdb1 during installation, and this is correct! When I installed OMV5 I had only two drives connected to the system:
    my small data SSD (label 'ssd') as /dev/sda and the tiny tiny system SSD I installed OMV on (/dev/sdb).
    After setup I attached the RAID drives and they became /dev/sda - /dev/sdd. So the other drives were pushed to e and f.


    The reason for having the system drive on the least prominent letter is that it is an NVMe SSD living on an expansion card.


    The question is: How can I safely make OMV5 lose its memory? In the File Systems tab there is a Delete button, but I'm a bit concerned that deleting the phantom /dev/sdb1 could somehow break my real /dev/sdb.
    But when I boot up without the RAID again, the ghost /dev/sdb1 entry is not there. So what to do now?
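
    Before touching the Delete button, I would first check where the ghost entry actually lives (a sketch of checks; the config path is the OMV default, and finding nothing would at least rule these places out):

    Code
    # Confirm the kernel really sees no sdb1 partition:
    lsblk /dev/sdb
    # Search the usual suspects for a stale reference:
    grep -n "sdb1" /etc/fstab /etc/openmediavault/config.xml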


    /edit: I detached my RAID drives because many people here recommend this. Does anything speak against reinstalling OMV with ALL 6 drives connected and being very careful when choosing the right drive for root?
    This way OMV would remember the right 'drive letter' for root from the start.


    @votdev No matter how I solve this problem for myself, this behavior is definitely a bug that should be fixed somehow (maybe also in OMV4)!

    Ok, you had a post I was reading and it just disappeared :)


    But to clarify, wipefs -n will do nothing other than display the partition and file system information on that drive.


    Here is what wipefs --no-act /dev/sdb said:

    Code
    root@myserver:~# wipefs --no-act /dev/sdb
    DEVICE OFFSET TYPE UUID LABEL
    sdb 0x1000 linux_raid_member c22bdeb3-7fdd-f2eb-641b-3e42597f1d04 myserver:daten


    Output for /dev/sda, /dev/sdc and /dev/sdd looks EXACTLY the same (except for the device name)!


    So what to do? I still wasn't able to find any signs of a /dev/sdb1 partition/filesystem in the terminal ... ?(

    @Mr Smile looks as if your last post was in a mod queue as it wasn't there when I replied.

    arrrg mod queue again. :cursing:
    Just wanted to add a correction ... 8|


    /edit: @votdev Automatically putting previously published posts into the mod queue because of some tiny edits is annoying for us users and also produces unnecessary work for the mods. Please think about disabling this automatism!

    @geaves Thanks for your reply!

    Nothing to worry about, it's assigned a new raid reference.

    I also don't think so, but wanted to mention it because it's different from before.


    It's the same on my test machine.

    Good to know. However, I still don't think displaying SWAP under File Systems makes sense, because a swap partition by definition doesn't contain an actual file system.


    That is odd and it must be a first :)
    There are two things you could do:


    1. SSH into OMV and run wipefs -n /dev/sdb. This will not wipe the drive, but it will give you information on the file system signatures; it may be possible to remove whatever is causing the entry (see the sketch after this list).


    2. Remove /dev/sdb from the array, wipe it as if it were a new drive and re-add it. All of that can be done from the GUI.
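
    For option 1, removing a single offending signature could look roughly like this (a sketch only, not a recommendation; the device and offset are placeholders to be filled in from the wipefs dry run):

    Code
    # Dry run first: list signatures and their offsets without writing anything:
    wipefs --no-act /dev/sdb
    # Erase only the signature at a given offset, keeping a backup of the
    # overwritten bytes under ~ (wipefs-sdb-<offset>.bak):
    wipefs --backup --offset 0x1000 /dev/sdb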


    First I have to say that one reason for not deleting the RAID and starting over with empty disks was that it is to a large extent filled with 'medium' important data, and at this moment I don't have the capacity for a full backup. (My really important stuff is of course properly backed up twice.) Losing those terabytes of data would still be a shame, but the reacquisition would probably cost the same 200€ that a hard disk of this size would cost. So let's say I'd like to minimize the risk of losing those 200 euros. ;-)


    1. I had a look at the manpage of wipefs and the -n option seems to be ambiguous. 8|

    Quote

    -n, --noheadings Do not print a header line.

    Quote

    -n, --no-act Causes everything to be done except for the write() call.

    Just to be sure: You meant --no-act, right?


    Anyway, the risk of having any 'old' file system signatures on my second drive is near zero. I bought all four drives manufacturer-sealed (last year or so) and didn't do any experiments before building the RAID5 array in the OMV4 GUI.



    2. I'd keep degrading the array as the last option, to minimize the risk of data loss.



    The interesting part is: I didn't manage to display the 'ghost partition' in the terminal. So where does the File Systems tab of OMV5 get its data from?
    @votdev Could you (or someone else who knows the code) please point this out? Thanks a lot!
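
    In case it helps: one place I would check is OMV's own config database (a hypothetical check, assuming the stock OMV5 tooling and default datamodel names):

    Code
    # Dump the mount point entries stored in OMV's config database:
    omv-confdbadm read conf.system.filesystem.mountpoint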

    Maybe answering this list of questions will help you get started and can perhaps already clarify a few things: Degraded or missing raid array questions

    @cabrio_leo Thanks for the link. I'll gather the information.


    1) cat /proc/mdstat:

    Code
    root@myserver:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : active raid5 sda[0] sdd[3] sdc[2] sdb[1]
    11720661504 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
    [>....................] check = 0.0% (1056320/3906887168) finish=369.7min speed=176053K/sec
    bitmap: 0/30 pages [0KB], 65536KB chunk
    unused devices: <none>


    2) blkid:

    Code
    root@myserver:~# blkid
    /dev/sda: UUID="c22bdeb3-7fdd-f2eb-641b-3e42597f1d04" UUID_SUB="7e3040a4-847d-74a5-f88f-5143f8c1e1ee" LABEL="myserver:daten" TYPE="linux_raid_member"
    /dev/md127: LABEL="daten" UUID="93f6618c-6119-4c35-953b-67b0f175db1a" TYPE="ext4"
    /dev/sdb: UUID="c22bdeb3-7fdd-f2eb-641b-3e42597f1d04" UUID_SUB="ff046fc5-6b2d-56fb-7c80-d4a392c7fd24" LABEL="myserver:daten" TYPE="linux_raid_member"
    /dev/sdc: UUID="c22bdeb3-7fdd-f2eb-641b-3e42597f1d04" UUID_SUB="fd82acbc-3708-cd65-6cf9-86293f1838bb" LABEL="myserver:daten" TYPE="linux_raid_member"
    /dev/sdd: UUID="c22bdeb3-7fdd-f2eb-641b-3e42597f1d04" UUID_SUB="53b1fd9f-c3c6-ce77-294f-78ba5951a55e" LABEL="myserver:daten" TYPE="linux_raid_member"
    /dev/sde1: LABEL="ssd" UUID="923948de-d9a1-4262-b08a-b913fedd8e15" TYPE="ext4" PARTUUID="235f252d-fbb3-4102-9382-ea562d1c7e8b"
    /dev/sdf1: UUID="ea9995f6-9084-43b3-90d2-17bcb592ed50" TYPE="ext4" PARTUUID="1885578f-01"
    /dev/sdf5: UUID="0353bed2-8aa8-4524-b817-b9173f3b55a0" TYPE="swap" PARTUUID="1885578f-05"


    3) fdisk -l | grep "Disk "


    4) cat /etc/mdadm/mdadm.conf


    5) mdadm --detail --scan --verbose

    Code
    root@myserver:~# mdadm --detail --scan --verbose
    ARRAY /dev/md/myserver:daten level=raid5 num-devices=4 metadata=1.2 name=myserver:daten UUID=c22bdeb3:7fddf2eb:641b3e42:597f1d04
    devices=/dev/sda,/dev/sdb,/dev/sdc,/dev/sdd


    6) Array details page from OMV5 RAID Management tab says:


    This all looks good to me, but the 'ghost device' /dev/sdb1 is still present and marked as missing.
    Any ideas? :whistling:


    @votdev I don't have a GitHub account, but this might be an OMV GUI bug. Could you please have a look at it? I'm willing to provide more terminal output if needed.


    /edit: The only notable thing is that mdadm.conf seems to be a bit empty. But I have no idea if this is still the right place to look. ( @ryecoaaron's posting was meant for OMV 1.0.) Did I perhaps miss some important step in my RAID migration?
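
    If the array really is missing from mdadm.conf, repopulating it from the running arrays is a common fix (shown as a sketch; back up the file first and verify the appended line matches the scan output above):

    Code
    cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u    # let the initramfs pick up the new array definition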

    Hi all,


    yesterday I took the step of migrating my home server from OMV4 to OMV5.


    I wanted to start over with a clean system, so I decided to wipe the system disk. First I shut down the old system, then detached all drives except the system SSD and went through the setup process.
    After setup and installing some add-ons I reattached my drives.


    To my relief, my old 4-disk RAID5 (/dev/md127) and my second SSD (/dev/sde) were recognized immediately. So I mounted them, reconfigured my shares, and now everything works as before.


    But one thing makes me feel very uncomfortable. It seems like one of my RAID HDDs is also appearing as a missing file system /dev/sdb1.



    I have no idea what went wrong here. In fact, there shouldn't be any partitions on /dev/sdb, because on my old OMV4 I created a RAID over /dev/sda - /dev/sdd without any partitions.
    Running fdisk -l also doesn't show any partitions on /dev/sdb.
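
    For completeness, the same could be cross-checked at different layers (a sketch; each tool probes a different level):

    Code
    fdisk -l /dev/sdb       # partition table as the partitioning tools see it
    cat /proc/partitions    # partitions the kernel currently exposes
    blkid -p /dev/sdb       # low-level signature probe of the whole device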



    Here you have my current OMV5 File Systems view
    filesystems_omv5.png



    and here for comparison an old screenshot from my previous omv4 system
    filesystems_omv4.png



    As you can see, the 'device' of my RAID5 also changed from /dev/md0 to /dev/md127 (and my SWAP partition is listed under File Systems now - why?), but that shouldn't be the trigger for this strange behavior.
    I also tested what happens when I start OMV5 with the RAID5 detached. Without the RAID drives, the mysterious /dev/sdb1 entry also vanishes.



    Does anyone have an idea what the problem is here?


    Thanks for your help!


    /edit: Oh, sorry. I must have just been blind. :thumbup:
    Could some mod please move this thread to the RAID section? Thanks!

    @votdev


    [...]
    I deliberately don't ask for a date. Please just give me your estimate of whether it's a question of weeks, months, quarters (or years :D ).
    [...]

    I started this thread with that question on July 12th, 2019. (exactly 7 months ago 8| )


    I promise to never make such jokes again. :D


    Would any mod please close this thing? Thanks!

    I think OMV5 will be released after Debian 10.1. From my side all features are done in OMV5.

    Debian 10.2 is out now. I just wanted to ask if the plans have changed. Somewhere else in the forum I've read about planned features and GUI changes for OMV6. Talking about v6 while everyone is waiting for v5 sounds a bit strange to me. Is v5 perhaps being skipped completely? At least it may have a rather short release cycle since, according to a blog post from August, the ETA of OMV6 is 2020.


    Please don't get me wrong! I'm thankful for having OMV at all, and I understand why fixed release dates are problematic, but I have to say that from my user perspective I'm very uncomfortable with the current situation. In a perfect world, new OMV versions would somehow align with new Debian releases. This would give us users the prospect of (two-year) long-term stability and give you (@votdev, @ryecoaaron) more air to breathe between two releases.


    For me, long-term stability (while at the same time using an actively supported version) is more important than having the most bleeding-edge features available. In this sense I'm a bit shocked to see so many users asking questions in the forum about completely outdated versions.


    Having a somewhat 'fixed' two-year release cycle like Debian's (and routine upgrade paths) could also train users to upgrade more often. I admit I would also prefer not to touch a system that has been running and doing its job for years. Having to take care of a dist upgrade every two years while being up to date in between seems like a good tradeoff to me.

    Hello everyone,
    as OMV5 is just around the corner, I'd like to do some testing. On OMV4, borgbackup is the most important plugin for me, so setting up an OMV5 test machine would not make sense if borgbackup is missing. Could any OMV5 user please check whether there is a borgbackup plugin available in the omv-extras tab?


    Thanks!

    OMV5 has NOT been released, or did you see an official announcement on the homepage? Sourceforge simply shows the latest uploaded file, that's all.

    @votdev
    I know, but on the other hand I understand why people are confused. No offence (as I really like your work), but you have to improve your communication. :)


    On https://www.openmediavault.org there is absolutely no info about which version(s) are up to date. On the news page there is an OMV 6 teaser and some 4.1 news. Aside from that, NOTHING ... no info. People who are used to the Debian 'testing - stable - oldstable' scheme must think that OMV 5 is the most recent production version.


    When you then visit the official download page (https://sourceforge.net/projects/openmediavault/files/), OMV 5.0.5 is the newest version that is not marked as beta. I understand why you want to wait for the release of Debian 10.1, but I really don't understand why you didn't append 'beta' to 5.0.5 when OMV5 is not officially released yet.


    The result is - as you can see - that the 'unreleased' 5.0.5 has far more downloads than 4.1.22. The only place where OMV 5 is clearly marked as 'under development' is the forum.


    I'm not sure whether this confusion is even intentional, to attract more testers. In any case, that's what is happening right now.

    In another thread, Volker estimated that OMV 5.0 will be released after Debian 10.1,
    so a release within the next two months seems realistic.


    @ryecoaaron I wanted to ask if there are any major incompatibilities coming our way.
    Will all the plugins be available again shortly after the release of Usul?
    I'm asking because some of us rely heavily on those plugins, so in fact a transition from Arrakis to Usul can only happen when the plugins are ready for use again.
    Have you already tested them, and what are the results? Or do you need help testing or getting some plugins to work again?
    Please tell us the current state. :)

    @votdev I know this is an extremely unpopular question, but can you estimate roughly when OMV 5.0 Usul will be ready for release?


    Over the next few weeks I plan to put a new server into operation and would like to know if it is worth waiting for the new version. I just don't want to go to the trouble of setting everything up twice in a row.
    That's exactly what happened to me when Arrakis was released. The transition time between the Arrakis release and Erasmus EOL was only two months, if I remember correctly.


    I can see that there is a Usul beta on Sourceforge already, and installing Debian 9 right now feels a bit wrong to me too.


    I deliberately don't ask for a date. Please just give me your estimate of whether it's a question of weeks, months, quarters (or years :D ).
    And how long will the transition phase be this time?


    Thanks for your precious time and please don't feel rushed! :saint:

    Ok, I lost the battle against multi-citation ... 8|

    I didn't know bintray.com and didn't notice that there are links underneath each plugin on the website. 8)


    I'm not a repository expert, but I thought it makes sense that package versions in 'testing' should be at least as new as in 'stable' because - you know - "first ship it to the testers, and if nobody complains, copy it to 'stable'".
    openmediavault-backup, for example, is version 4.0.6 in 'stable' and 4.0.1 in 'testing'. Why would anyone want to test an old version when a newer one is in stable? :D But I agree. For small changes this two-step procedure might be overkill - especially when done manually.
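
    For anyone who wants to check this locally, apt can show which repo provides which version (using the package named above as the example):

    Code
    # Show installed/candidate versions and the repos they come from:
    apt-cache policy openmediavault-backup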


    So 'testing' here is just for development of new plugins, and once a plugin has made it to 'main', all further changes are pushed directly to the users?

    What website?

    Sorry, I meant:
    http://omv-extras.org/joomla/i…hp/omv-plugins-4/4-stable
    and
    http://omv-extras.org/joomla/i…p/omv-plugins-4/4-testing


    According to these websites, borgbackup is still in testing but not in stable. At the same time, the website says there are some 'old' versions in testing and newer ones in stable. (I didn't compare the testing website to the real testing repo as I don't have it activated.) No offense, I'm just curious: why don't you remove the outdated versions from testing? In my experience from other projects, testing repos are mostly used for unstable bleeding-edge stuff. It seems to be the other way around here. :D
    (Perhaps I'm not the only one who doesn't expect this.)

    I don't have a good reason. I just put it in the regular repo.

    Thanks a lot, man! :thumbup: It popped up in my server's Backup section. Will try it in the evening.


    Aside from that, the website looks a bit outdated. borgbackup still shows up in testing, and there are some 'old' versions listed while newer releases are already in stable. Are those website entries generated automatically or maintained manually?

    Hello everyone, hello @ryecoaaron,


    today I visited omv-extras.org and stumbled upon a borgbackup plugin. I know Borg backup and I really like its principles and how it handles things, but I've always been too lazy to learn its terminal foo. So I'd like to use this plugin.
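
    For context, this is roughly the terminal foo such a plugin would wrap (a sketch with placeholder repo and data paths, not the plugin's actual commands):

    Code
    # Create an encrypted repository (path is a placeholder):
    borg init --encryption=repokey /srv/backup/borg-repo
    # Take a deduplicated snapshot of a data directory:
    borg create /srv/backup/borg-repo::{now} /srv/dev-disk-by-label-daten
    # Thin out old archives according to a retention policy:
    borg prune --keep-daily 7 --keep-weekly 4 /srv/backup/borg-repo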


    My question now is: this plugin seems to have existed for some time, and there are no open bug reports on GitHub. What's the reason for it being in the testing repo only? Of course I'd like to have a safe backup solution for my data and not some beta stuff. ^^