Posts by PostalSete

    I have four drives in a RAID array, and recently one of the drives went missing in OMV. I no longer have access to the filesystem, and the RAID array no longer shows up in OMV either.


    My setup is four HGST SAS drives connected to an LSI SAS9201-8i controller. As a troubleshooting step to test for a faulty cable or a faulty controller, I reversed the order of the cables plugged into the drives, figuring that if one of the cables was bad I'd see a different drive go missing; however, all of the drives' serial numbers stayed the same. Their /dev/ paths did change when I checked the Storage -> Disks section of OMV, though. I'm not sure if that's relevant.


    We had a power cut one night around the time I noticed this issue. I can't say for sure whether it started then, as I wasn't actively using my NAS until a few days after the power cut.


    Doing a scan for drives in Storage -> Disks gives a "communication error" with no extra details.


    Below is the output of several commands I've seen recommended to run and post in these kinds of cases. I'm fairly technical, but I know very little about Linux and OMV.


    Looking for any guidance on this! Thanks!


    Code
    cat /proc/mdstat
    
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    md0 : inactive sdb[0](S) sdc[1](S) sda[2](S)
          11720659464 blocks super 1.2
           
    unused devices: <none>
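    A quick sanity check on the numbers above (my own arithmetic, assuming /proc/mdstat reports sizes in 1 KiB blocks):

```shell
# 11720659464 KiB split across the three members that are still visible
# works out to exactly 3906886488 KiB per drive, i.e. about 4.0 TB each.
echo $(( 11720659464 / 3 * 1024 ))   # bytes per member -> 4000651763712
```

    So the three members that do appear still report their full ~4 TB size; only the fourth drive is gone.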
    Code
    blkid
    
    /dev/sdf1: UUID="809af825-9205-44fb-af15-8ee268f3eb28" TYPE="ext4" PARTUUID="6a551e78-01"
    /dev/sdf5: UUID="367ede47-636a-41ee-a50c-91e1ed1e9b9c" TYPE="swap" PARTUUID="6a551e78-05"
    /dev/sde1: LABEL="Public" UUID="9ad1b12e-e18c-4d51-9439-11cb5245fc81" TYPE="ext4" PARTUUID="079114d8-1da9-4eb0-a5eb-f81480cb9ec7"
    /dev/sdc: UUID="28c73d6f-daf3-2b3b-6c7b-9ad0f7e5954f" UUID_SUB="dd1a4324-4be3-ee22-4c43-a98be7f16dd9" LABEL="NAS.local:data" TYPE="linux_raid_member"
    /dev/sdb: UUID="28c73d6f-daf3-2b3b-6c7b-9ad0f7e5954f" UUID_SUB="d41c4964-c59c-b1bf-7798-2a4cccecf19d" LABEL="NAS.local:data" TYPE="linux_raid_member"
    /dev/sda: UUID="28c73d6f-daf3-2b3b-6c7b-9ad0f7e5954f" UUID_SUB="adb24629-4d07-23a7-f581-686c9c7653b6" LABEL="NAS.local:data" TYPE="linux_raid_member"
    Code
    mdadm --detail --scan --verbose
    
    INACTIVE-ARRAY /dev/md0 num-devices=3 metadata=1.2 name=NAS.local:data UUID=28c73d6f:daf32b3b:6c7b9ad0:f7e5954f
       devices=/dev/sda,/dev/sdb,/dev/sdc
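    For reference, here is the recovery sequence I've seen suggested in similar threads for an inactive array. This is only a sketch of what I'm considering, not something I've run yet; the device names come from my blkid output above, and I'd rather have confirmation before forcing anything:

```shell
# Steps often suggested for re-assembling an inactive md array; printed
# here rather than executed, since I have NOT run them yet. Device names
# are taken from my blkid output (sda/sdb/sdc are the raid members).
cat <<'EOF'
mdadm --stop /dev/md0                        # release the inactive array
mdadm --examine /dev/sda /dev/sdb /dev/sdc   # compare Events / Array State
mdadm --assemble --force /dev/md0 /dev/sda /dev/sdb /dev/sdc
EOF
```

    From what I've read, comparing the Events counters from --examine first shows whether a forced assemble is safe, but I'd appreciate a sanity check on that.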

    Hi, I recently followed TDL's tutorial for AdGuard (a youtu.be video) and everything seemed to be working well until my nightly server reboot.

    I double-checked that the files edited on the command line in the video are correct, and it looks like they are... but I'm unsure how to check whether the initial Ethernet connection changes made in the GUI earlier in the video are correct.


    Now it seems my NIC is not working (no lights and no link at the Ethernet port). I ran ip addr and the interface shows: mtu 1500 qdisc noop state DOWN group default.
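    In case it helps, these are the checks I was planning to try next. Just a sketch: the interface name eno1 is a placeholder (I'd confirm the real one with ip -br link first), and I'm assuming OMV 5 manages the NIC through netplan / systemd-networkd:

```shell
# Checks I intend to try for the downed NIC; printed rather than executed,
# since the interface name (eno1) is only a placeholder here.
cat <<'EOF'
ip -br link                         # list interfaces, confirm the real name
ip link set eno1 up                 # try bringing the NIC up manually
ls /etc/netplan/                    # inspect the netplan config OMV wrote
netplan apply                       # re-apply it
journalctl -u systemd-networkd -b   # look for errors since boot
EOF
```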


    I have tried running omv-firstaid, but I get an "errno 11 - temporarily unavailable" error.


    I am running OMV 5 with no wild hardware (everything was stable before this change).


    Any ideas on how I may be able to remedy this? Much appreciated!