Posts by SerErris

    Everything is okay with your RAID. However, the array configuration expects a spare drive (which you do not have).


    That is why it complains.


    I am pretty sure I posted a thread about it a long time ago, as OMV always created its RAIDs with a spare drive in mind - and you always got this message...



    Check this post ...


    Howto remove the "SparesMissing event"
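
    A minimal sketch of the usual fix, assuming the stray hint lives in /etc/mdadm/mdadm.conf (the device name and UUID below are placeholders):

    # /etc/mdadm/mdadm.conf - remove the "spares=1" hint from the ARRAY line:
    #   before: ARRAY /dev/md0 metadata=1.2 spares=1 name=omv:0 UUID=...
    #   after:  ARRAY /dev/md0 metadata=1.2 name=omv:0 UUID=...
    update-initramfs -u   # rebuild the initramfs so the edited config is used at boot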

    If you believe the drive is still good, you can overwrite the first blocks of that drive and re-add it to the RAID.


    It is currently in failed mode and still has the raid header on it.


    So you can run dd if=/dev/zero of=/dev/sde count=10 bs=1M


    That will overwrite the first 10M of your drive with zeros.


    After that you can rejoin the disk to your array md127 and the resync will start.
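
    A minimal sketch of that sequence, using /dev/sde and md127 from above (verify the device name with lsblk first; dd is destructive):

    dd if=/dev/zero of=/dev/sde count=10 bs=1M   # wipe the stale raid header
    mdadm /dev/md127 --add /dev/sde              # re-add the disk to the array
    cat /proc/mdstat                             # watch the resync progress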


    Hope that helps.

    You can do it with the failed drive in place, as mdadm will mark it as failed and start in degraded mode.


    However, if you know which drive it is physically, you can just pull it and the array will likewise start in degraded mode.


    Also please be careful, as there is no protection anymore. Any further failure will lead to complete data loss.


    You have a good backup, right?

    Here is the indicator from dmesg:


    You should consider using RAID to get redundancy in your setup... and good backup/recovery is also key. It sounds like you do not have any backup in place for those files.


    If the files can be recreated, that is okay; but if you need the files, or do not want to go through the hassle of recovering them some other way, you should consider a proper backup with a proper cycle (maybe daily).


    Cheers

    Can you please capture the commands you run and their output, and paste them here?


    Output in text form (not images) would be best.


    It is a little difficult for me to understand what you are actually trying in this last step:


    But i know the old one is still there because i see that 20 gb is used of the drive. But i cant access the drive from prompt, just says Access denied.


    If you cannot access it - how do you see that 20 GB are used?


    When you try to access it, which user do you use?


    Giving us more information will enable us to tell you specifically what you can do.


    Regarding that sentence:
    But when i configurated everything again i discovered that it created a new File system directory instead of using the old one


    Do you mean you have created a new storage filesystem, or do you mean you have created a new root filesystem?


    Can you send us the output of the following commands:


    1. fdisk -l
    2. mount
    3. cat /etc/fstab
    4. a screenshot of the OMV GUI showing the storage configuration (disks, partitions, filesystems)


    Thanks
    Ser

    I have no knowledge of Proxmox, so this answer draws on my general IT knowledge of how things work.


    Most likely Proxmox is not passing SMART through the virtualization layer. If that is the case, you will never be able to use SMART to query the drive from within the VM.


    The only workaround would be to let the Proxmox layer handle this functionality and send out the mail.


    You can find information about configuring SMART in Proxmox here.


    https://forum.proxmox.com/thre…4-3-with-s-m-a-r-t.29514/
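
    If the disks are visible on the Proxmox host itself, a hedged sketch of checking and mailing from there (the device name and mail address are placeholders):

    smartctl -a /dev/sda   # query SMART on the host, where the physical disk is visible
    # /etc/smartd.conf on the host, so smartd monitors all disks and mails on problems:
    #   DEVICESCAN -a -m admin@example.com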

    There is a difference between "works" and "entries cleaned out from OMV".


    So even if you removed the disks, OMV may still remember the configuration.


    That is why votdev mentioned the other places where the information may still live - shared folders, filesystems, and so on.


    Once that is cleaned up as well, you "only" have to remove the disks at the end.

    Going back to 3.x would, for me, be the following:


    1. Note down the important parts of the exported filesystems (Samba config, storage part, RAID part) - best take screenshots or photos with your mobile.
    2. Shut down OMV.
    3. Pull all RAID disks - ensure that they are not accessible to the system for the next steps.
    4. Reinstall 3.x and upgrade to the latest release.
    5. Reconfigure everything (with the exception of Samba).
    6. Stop OMV again.
    7. Put all RAID disks back in - ensure that you really have all of them in place.
    8. Start OMV again.
    9. Look at the RAID panel and find your array.
    10. Re-set up the storage parts and re-export the shares.


    That should be it.


    If you are using other plugins, you need to note down all their settings as well and, if possible, back up those settings (e.g. CouchPotato and others). You also want to create a backup of your running system (not the data disks) to ensure you have everything you need.

    Thank you for your reply


    I guess my intention was to have 7TB of disk space that was mirrored across the four drives so that I could have some real-time mirroring. I'm not too hot on RAID setups so mirror seemed simple yet reliable enough for me. Does the way you suggest mean that I would have to rely on a delayed scheduled job to run before the data is synced?


    Thank you

    Hi


    Do you have LVM2 in use on top of the RAID?


    If so, the easiest way is to create a second (new) RAID 1 on the two 3 TB disks and create a physical volume on it within the same volume group. After that you can grow the logical volume by 3 TB and finally grow the filesystem.


    That would end up like this:


    Raid 1: 2x4TB - MD1 : PV1
    Raid 2: 2x3TB - MD2 : PV2
    VG1: consists of PV1 and PV2
    LVOL1: running across PV1 and PV2 (concatenated)
    FS on LVOL1
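
    A minimal sketch of that path, assuming the new disks are /dev/sdc and /dev/sdd, the existing volume group is vg0, the logical volume is lv0, and the filesystem is ext4 (adjust all names to your system):

    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
    pvcreate /dev/md2                    # turn the new array into a physical volume
    vgextend vg0 /dev/md2                # add it to the existing volume group
    lvextend -l +100%FREE /dev/vg0/lv0   # grow the logical volume into the new space
    resize2fs /dev/vg0/lv0               # finally grow the ext4 filesystem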


    If you have no LVM at the moment but you have enough space (or can temporarily move stuff elsewhere), you can do the following:


    Create a second MD from the 2x3TB drives (MD2).
    Create a new physical volume on MD2.
    Create a new volume group containing that physical volume.
    Create a new logical volume on the volume group (full extent).
    Create a new filesystem on the new device ...
    Then copy everything (cp -a) from the old disk to the new one. Ensure the drives are not actively served anymore.


    After that has finished, create a new physical volume on MD1 (your old array).
    Add the PV to the volume group.
    Extend the logical volume and the filesystem.
    Done.
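
    A hedged sketch of this copy-based path, assuming /dev/md2 is the new array, /dev/md1 the old one, vg1/lv1 the new volume group and logical volume, and placeholder mount points:

    pvcreate /dev/md2
    vgcreate vg1 /dev/md2
    lvcreate -l 100%FREE -n lv1 vg1
    mkfs.ext4 /dev/vg1/lv1
    mount /dev/vg1/lv1 /mnt/new
    cp -a /srv/old-disk/. /mnt/new/      # copy everything; stop all shares first
    # after the copy has finished, fold the old array back in:
    pvcreate /dev/md1
    vgextend vg1 /dev/md1
    lvextend -l +100%FREE /dev/vg1/lv1
    resize2fs /dev/vg1/lv1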


    You need to unmap and unmount everything from the OMV GUI before you start, as this operation conflicts with /etc/fstab and the mount points, and OMV might not mount anything anymore after you have done all this manually.

    Then I'm probably out of luck. Looks like two disks must be broken then :(

    Hi,


    No - if two disks were broken, you would not be able to mount the filesystem at all.


    I don't know since when this has been the case - but it used to be that OMV always left you with a degraded array. That was a bit unfortunate, but I believe that was still in OMV 1 ...


    The question for me would be why the fourth disk was kicked out of the RAID.


    It only adds a hot spare automatically - which you certainly do not have with four disks.


    Regards
    Ser

    I do not get your question.


    The OP asked about the fastest way - and that is definitely a copy or rsync within the box. However, it also requires some changes to the filesystems and shares afterwards (all of them).
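
    For the in-box copy, a minimal sketch (the mount points are placeholders for your old and new filesystems):

    # -a preserves permissions/owners/timestamps, -H hard links, -A ACLs, -X xattrs
    rsync -aHAX --progress /srv/dev-disk-by-label-old/ /srv/dev-disk-by-label-new/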


    I do not even get what you want to do.


    Do you mean you want to add the 4 TB disk to the system, create a RAID array out of it, and then grow the array?


    That might potentially work... I am not sure if it is supported by the GUI.

    Hi guys,


    I have been absent from this forum for a long time and now need some consulting on storage myself.


    I myself still live in the old world, where RAID+LVM+FS is a great thing. I still believe in its strengths and the pros it comes with. Of course I also know its downsides. In other words, I have expert-level know-how on the old-school approach.


    However, I am not sure whether that is still the best way today for the use case of my NAS. So I am looking for some consulting from you guys on how to set up my new NAS (of course with OMV :) ).


    The NAS today consists of 4x4 TB WD Red drives forming a RAID 5 without a spare. I only use it privately, to store converted BDs, movies, pictures etc. It has a very small section for other personal data. The personal data is regularly backed up; the media files are currently not backed up at all. I do rely on redundancy, as user errors (aka deletes) can only come from me :)


    However, I would now like to put in larger disks. Those disks will have the same profile and usage as before. Of course the current setup is very space efficient, as I only lose one disk.


    Question:
    What other setup would give me redundancy, maybe with snapshotting, offering some protection against massive deletes (I do make mistakes) or beloved ransomware?


    I am open to any suggestions and will then look into the solutions.


    A short recommendation with the main benefits would be great.


    Highly appreciate your feedback


    Ser

    Using the system disk also as a disk for storage (shares etc.) is not supported out of the box.


    There is a long thread on this topic with a brief explanation at the start.


    How to partition and use OMV system disk for user data


    But again, this is not supported out of the box and should only be done by advanced users.


    The better way is to run OMV from a USB stick (please activate the plugin for USB sticks, openmediavault-flashmemory) and to use your disk as the data disk only (no system on it).


    You also need to run backups, as a disk failure would then mean total data loss.

    Well, I can only refer to the first thread here.


    Part of the answer given there is definitely wrong, and it is something that keeps being done wrong again and again.


    Basic rule:
    Disks that are to be used exclusively in a RAID array should, as a rule, have no partitions on them - not even a partition table.
    The problem is almost certainly that mdadm is not scanning the right partitions, and certainly none with GPT. It is already telling that blkid shows nothing for those disks.


    As always, there are exceptions for special cases, but I will not go into those here.


    So what does that mean:


    WARNING: ALL DATA ON THE DISKS WILL THEN BE GONE!
    Wipe all disks, and do it properly:
    dd if=/dev/zero of=/dev/sdX bs=1M count=2


    Make sure to pick the right sdX here, e.g. sdb or sdc etc. Be absolutely careful not to pick the disk OMV is installed on - otherwise nothing will be left on it and you can start over from scratch.


    Afterwards, use OMV to join the disks into a RAID again, then proceed with the next steps (LVM/filesystem etc.).
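
    A minimal sketch of the wipe step, assuming the data disks are /dev/sdb and /dev/sdc (triple-check the names; this destroys everything on them):

    for d in /dev/sdb /dev/sdc; do
        wipefs -a "$d"                          # remove all filesystem/raid/partition signatures
        dd if=/dev/zero of="$d" bs=1M count=2   # and zero the first 2 MB for good measure
    done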

    It is possible with the following steps (generic, and you need a lot of RAID knowledge to do it correctly).

    • Change your RAID from RAID 5 to RAID 6. You need one additional disk for this. http://ewams.net/?date=2013/05…g_RAID5_to_RAID6_in_mdadm
    • Upgrade all disks in your RAID from 2 TB to 4 TB:
      • Fail one active disk in your RAID. The RAID is now degraded.
      • Replace the failed disk with the bigger one.
      • Reintegrate the disk into the RAID and start the rebuild.
      • After the rebuild has finished, start over from the first sub-step until all disks are 4 TB.
    • Up to this point your RAID has not grown and still uses only 2 TB of each disk. So now grow the RAID so that it uses the full capacity of all disks (next rebuild).
    • After that you have the new RAID 6 and can add further 4 TB disks to it. Each addition is again a rebuild and will take quite some time (see the sketch below).
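
    A hedged sketch of the individual mdadm steps, assuming a four-disk RAID 5 on /dev/md0, /dev/sdf as the extra disk, and /dev/sdb as the disk being swapped (see the ewams guide linked above for the full procedure):

    mdadm /dev/md0 --add /dev/sdf       # the extra disk needed for raid6 (placeholder name)
    mdadm --grow /dev/md0 --level=6 --raid-devices=5 --backup-file=/root/md0.bak   # raid5 -> raid6
    mdadm /dev/md0 --fail /dev/sdb      # fail one disk ...
    mdadm /dev/md0 --remove /dev/sdb    # ... remove it and physically swap in the bigger drive
    mdadm /dev/md0 --add /dev/sdb       # re-add; the rebuild starts
    mdadm --grow /dev/md0 --size=max    # once all disks are replaced, use the full capacity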


    So that is not the fastest way; you should still make a backup before doing all this, and you should really know what you are doing, otherwise you will most likely lose all data.


    The other method (start from scratch with a RAID 6 and use backup/restore) should be much safer and also much quicker in the total runtime of the whole operation.

    Just to make it clear once more:


    RAID clean means: the RAID is okay and has been recognized correctly.


    After a reinstall, all users have to be recreated (they are not stored on the RAID), and then the UIDs/GIDs of the folder structures have to be adjusted as well.


    Take a look at the permissions of the filesystem in question (ls -l directory). The UIDs/GIDs are listed there. If they do not match the FTP users, you have to run a chown over them.


    e.g. chown -R ftpuser:ftpgroup directory


    That sets the user and group to the right ones for a directory and everything below it (-R option).


    These are Unix basics, though. As I said, if you created a user manually (with or without the OMV GUI), there is no guarantee that it got the same UID/GID as the user had before.
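
    A minimal sketch of the check and the fix; ftpuser, ftpgroup and the share path are placeholders:

    ls -ln /srv/dev-disk-by-label-data/share   # numeric UIDs/GIDs as stored on disk
    id ftpuser                                 # UID/GID of the recreated user
    chown -R ftpuser:ftpgroup /srv/dev-disk-by-label-data/share   # fix ownership if they differ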

    OMV itself does recognize the grown MD size.


    However, you need to grow the filesystem on it as well. You can do this in the OMV section


    OMV->Storage->File Systems


    If you select the File System you can grow it.


    The risk of the mdadm operation is the same regardless of RAID 6/5, and I do not really see much risk in it anyway. The operation recalculates the parity across the whole data area to reflect the new number of disks in the RAID. The only real risk might be a complete power loss.


    I have not tested it, but afaik the add operation itself is very short, and the RAID is then in degraded mode. So as long as you do not have a disk failure during the rebuild, you should be fine.
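
    For reference, a hedged CLI equivalent of adding a disk and growing the array, assuming /dev/md0 currently has four disks and /dev/sde is the new one:

    mdadm /dev/md0 --add /dev/sde                                             # add the disk as a spare (the short part)
    mdadm --grow /dev/md0 --raid-devices=5 --backup-file=/root/md0-grow.bak   # reshape onto five disks (the long part)
    resize2fs /dev/md0   # afterwards grow the filesystem, or use OMV->Storage->File Systems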


    A good backup is always advised for these operations anyway.


    Regards
    Ser

    Actually looks good. Can you send over the mdadm.conf?


    Could it be that not all disks are being scanned, or that some of them just happen not to be present at scan time?


    Is the md detected at boot time on your system, or only later during init?


    It might help to switch it from boot to init, since the hardware may not yet reliably have all drives ready at boot time.
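
    To answer those questions, a minimal sketch of what to compare (standard paths, nothing system-specific assumed):

    mdadm --detail --scan              # what mdadm can assemble right now
    cat /proc/mdstat                   # current state of all arrays
    grep ARRAY /etc/mdadm/mdadm.conf   # compare the configured arrays against the scan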