Disks disappeared after power outage

  • Hi all, the other day we had a sudden power outage. When I rebooted, all my data disks had disappeared; they just say "missing" in the OMV GUI.
    They don't show up under physical disks or in the drop-down list.
    I don't want to think about losing my data.

    • Official Post

    The first thing to do is a real cold reboot: power down and unplug the PC for at least 30 seconds, then plug it back in and start it again.


    If the disks still don't show up, download a live Linux distro like Knoppix, burn it to a CD, and boot from it. If your disks use standard Linux filesystems (EXT2/3/4, BTRFS, etc.), Knoppix should detect them.
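    From a Knoppix desktop, a terminal and a couple of standard commands will show whether the kernel sees the drives at all. A quick sketch (device names like /dev/sda are just examples):

    # List every block device the kernel can see, with size and filesystem type
    lsblk -o NAME,SIZE,FSTYPE,LABEL,MOUNTPOINT
    # Show filesystem signatures (type, UUID) on every detected partition
    sudo blkid
    # If a disk shows up but looks suspect, check its SMART health (needs smartmontools)
    sudo smartctl -H /dev/sda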

  • I forced something similar to a power outage: I disconnected one drive from its power (I'm still in the trial phase).
    The disks were empty and not doing anything.


    Nevertheless, the disk disappeared from the drop-down lists and I had to wipe it (although it was empty). Only that brought it back into the drop-down and made it ready for rebuilding the RAID5 I had built with it.


    I am a rookie in Linux, so I am very interested in how this thread continues.


    Do you have a RAID too? What kind?

    • Official Post

    If you wiped the drive(s), that's it. There's no recovery.
    ____________________________________________


    I'm using a ZFS mirror (referred to as a "zmirror"). It's the rough equivalent of RAID1. If you're planning on rebuilding an array and you have 8GB of RAM or more, I'd recommend ZFS. "RAIDZ1", in ZFS, is the equivalent of mdadm RAID5, but it's far better.
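    Just as a sketch of what creating a pool looks like (the pool name "tank" and the disk paths are placeholders for your own drives):

    # Two-disk mirror, the rough RAID1 equivalent
    zpool create tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
    # Or three (or more) disks as RAIDZ1, the rough RAID5 equivalent
    zpool create tank raidz1 /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3
    # Confirm the layout and health
    zpool status tank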


    If you're interested, I can help with a couple of pointers and a few command lines to achieve POSIX (Linux) permissions. (ZFS was born on Solaris, so a few adjustments are needed.)
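    To give a rough idea, those adjustments are mostly ZFS dataset properties. A sketch, where "tank/data" and the user/group are just examples:

    # Enable POSIX ACLs and store extended attributes efficiently
    zfs set acltype=posixacl tank/data
    zfs set xattr=sa tank/data
    zfs set aclinherit=passthrough tank/data
    # After that, normal Linux ownership and permissions apply
    chown -R youruser:users /tank/data
    chmod -R 775 /tank/data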
    ____________________________________________


    But, before going there, are you sure your hardware is OK? (SATA ports, etc.?) I've seen strange events before but I've never heard of losing all disk contents after a power outage.

  • Sorry,


    but I think there is a misunderstanding.


    I am still in the trial phase and thought I'd try out what happens if I ever need to swap a disk because it fails.
    Hence I thought I'd force an error on an (empty) RAID5 by simulating the loss of one disk through a power outage.


    So some questions:
    1. If this happens, do I HAVE to wipe the disk that is not recognized and reconstruct the array? Is there no other way than wiping the disk?
    2. In ANY power outage, where usually ALL disks will fail at once, will ALL data on a RAID5 be lost?
    Isn't there something like a consistency check for bad data, or a checkdisk?
    In real life, only the data currently being written to disk should be damaged... everything else should be OK.
    And the RAID5 array in OMV should be able to cope with that.


    If there is nothing that can handle this, I definitely will not/cannot use OMV :(

    • Official Post

    Yeah, well, I didn't know you were just experimenting. Rereading your first post, especially the part about not wanting to lose data, there was no way to know that the data didn't exist.
    ___________________________________________________________
    (Keyed to your questions:)


    1. There's no specific answer to this question because it's too broad. There are numerous possible reasons why a disk (or disks) might not be recognized in an array, ranging from loose or bad cables and excessive errors up to failed disk(s). Corrective actions will vary depending on what went wrong. What happened in your case? I have no idea, and there's not enough information provided to speculate. However, losing ALL data in an mdadm array after a power outage is not a common event.
    Since it appears you're looking for answers on failure modes of Linux software RAID, give this page a read: mdadm RAID. (Specifically, look at "When things go wrong".)
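    Before wiping anything, it's worth seeing what mdadm itself reports. As a rough sequence (device names are examples; adjust to your system):

    # Which arrays does the kernel currently know about?
    cat /proc/mdstat
    # Detailed state of a specific array
    mdadm --detail /dev/md0
    # Inspect the RAID metadata on the individual member disks
    mdadm --examine /dev/sd[bcd]
    # Try to reassemble from that metadata (add --force only as a last resort)
    mdadm --assemble --scan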


    2. This is another question that can't be answered specifically. First off, realize that traditional RAID (mdadm software RAID, or even arrays created on a hardware adapter) suffers from "the write hole". That means data that is being committed to disk can (and probably will) be lost in a power outage. The solution for this issue is simple - get a UPS. They're used in nearly all commercial data centers, and they should be used with home servers, which apply the same technologies.
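    As an aside, OMV has a UPS plugin based on NUT (Network UPS Tools). Once a USB UPS is configured, a single command shows whether the server can see it; "myups" here stands for whatever name you gave it in the plugin:

    # Query the configured UPS for status, charge and estimated runtime
    upsc myups@localhost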


    Where you're talking about "checkdisk" (the Windows version is chkdsk.exe), Linux filesystems have their own version of it, but it depends on the one you use. EXT4's version is fsck. However, the two - mdadm RAID (a disk-aggregating technique) and EXT4 (a filesystem) - are unaware of each other. (The same would be true of Windows/NTFS on software or hardware RAID.)
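    For what it's worth, fsck has to be run against an unmounted filesystem. A hedged example, assuming the array device is /dev/md0 and it carries EXT4:

    # Unmount first; running fsck on a mounted filesystem can cause damage
    umount /dev/md0
    # Force a full check; it will prompt before making repairs
    fsck.ext4 -f /dev/md0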


    This is why I mentioned ZFS RAIDZ1. ZFS is CoW (copy-on-write), so there's no issue with the write hole or data lost in power outages (in theory). ZFS is a disk aggregator, a filesystem, and logical volume management all rolled into one, but using it would require a bit of research on your part. It does add some admin complexity.
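    The checksums are also what make ZFS scrubs useful: a scrub walks every block, verifies it against its checksum, and repairs it from redundancy where possible. A sketch, assuming a pool named "tank":

    # Start a scrub of the whole pool (it runs in the background)
    zpool scrub tank
    # Check progress and whether any errors were found or repaired
    zpool status tank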
    ___________________________________________________________


    About whether or not to use OMV - I'd ask you (rhetorically speaking): how do you know the problem is with OMV and not your hardware? For information purposes, OMV is based on Debian Linux, which is rock solid and probably the most actively built and used mainline distro available. (Mint, Ubuntu and many others are based on Debian.) That means that if your hardware actually has a compatibility problem (and I tend to doubt that), the problem is with Debian.


    Since you're exploring, maybe you should look around a little more and give some thought to configuring a few servers in VMs. During my own search, I used VirtualBox on a Windows client with a fast CPU, 16GB of RAM, and a good-sized disk. I configured a number of servers virtually before I finally settled on OMV.


    In any case, good luck in your search.

  • Thanks, FLMAXEY.


    Sounds encouraging... I was feeling a bit down about the whole project because of this.


    Write hole understood. Can fsck fix issues caused by this (meaning, save the rest of the files even if the one being written is corrupt)?


    I am looking into it a bit more again now. Your words build up some confidence again...
    Some more background: my disk array was built and could be accessed without issues. It was a trial of "what happens when".


    ZFS sounds promising, but that is not the topic here.
    Just this much: isn't BTRFS the better choice, as it's developed for Linux instead of Solaris Unix?


    mdadm and ext4 being separate and "unaware" of each other is known. I honestly didn't know ZFS and BTRFS fix this...


    Not sure, though, if my weak hardware can cope with ZFS or BTRFS (an AMD E-450 APU and currently 4 GB of dual-channel RAM).


    EDIT: You mention BTRFS is not like ZFS?


    ... thanks anyway

    • Official Post

    BTRFS works fine for single disks. I'm using it for a single disk on a weak ARM-HF platform (a Raspberry Pi) with 1 whole GB of RAM. The repair utility is btrfs check --repair. I've had to use the utility on one occasion, and it straightened out a few issues.
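    Just for reference, the filesystem has to be unmounted before the repair is run, and a read-only check first is the safer order (the device path is an example):

    # Unmount the BTRFS filesystem first
    umount /dev/sda1
    # Read-only check; only add --repair if problems are actually reported
    btrfs check /dev/sda1
    btrfs check --repair /dev/sda1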


    However, if you want a RAID5-type array, BTRFS is not quite there yet -> BTRFS project status. (Look at the entry for RAID56.) If BTRFS were a bit more mature, I'd be using it. It has some great features, but for what I'd want it for (bitrot protection and solid reliability), it's just not there yet.


    You could use BTRFS as a filesystem on top of a traditional mdadm software RAID array. Another user on the forum is doing just that, in an attempt to check out his hardware. (BTRFS can run data scrubs that, along with file checksums, can detect very subtle file errors and corruption.)
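    A scrub runs against the mounted filesystem. As a sketch (the mount point is an example):

    # Start a scrub on the mounted BTRFS filesystem (runs in the background)
    btrfs scrub start /srv/data
    # Check progress and any checksum errors found
    btrfs scrub status /srv/data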


    For ZFS, a weak(ish) CPU wouldn't be the issue, as much as low RAM might be. For a home file server, you might be able to get away with 4GB, but adding 4GB for a total of 8GB would be better and adequate for the purpose.


    Integrating volume management (RAID and other functions) with the filesystem (so they're aware of each other) is a pretty big deal. Add to that checksummed files for bitrot protection, self-healing files, and copy-on-write (no write hole, even in a power outage), and it's pretty much the holy grail for a home server filesystem. (In my opinion, of course. :) ) ZFS lacks a few things BTRFS will have when it's ready, but for now ZFS is about as good as it gets.
    _________________________________________


    There are other ways to aggregate multiple disks that have little to no CPU load (mergerfs or LVM2). Mergerfs will work with the tried-and-proven EXT4 filesystem (and others). In addition, RAID-like protection/backup can be had with SNAPRAID. Mergerfs with SNAPRAID is, arguably, better than traditional RAID for aggregating disks and for recovery. (And there are those on the forum who would make that argument.)
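    To give a very rough idea of how the two fit together (paths are placeholders, mergerfs mount options vary by version, and in OMV the plugins handle this through the GUI):

    # /etc/fstab: pool two EXT4 data disks into one mount point with mergerfs
    /srv/disk1:/srv/disk2  /srv/pool  fuse.mergerfs  defaults,allow_other  0  0

    # /etc/snapraid.conf: one parity disk protects the data disks
    parity  /srv/parity/snapraid.parity
    content /srv/disk1/snapraid.content
    content /srv/disk2/snapraid.content
    data d1 /srv/disk1
    data d2 /srv/disk2

    # Run periodically:
    snapraid sync     # update parity after files change
    snapraid scrub    # verify data against checksums and parity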


    Of note is that OMV supports ALL of the above.
