Posts by q2m2v

    So my USB memory stick is apparently starting to die. I purchased it only a few months ago, so I wasn't expecting it to fail this quickly (even with the FlashMemory plugin). But here we are. Anyway, I created an image of the USB drive and transferred that image over to an SSD. When I try to boot off the SSD, it seems to work okay, except it keeps spitting out this error every few seconds:


    Code
    xxx.yytimestampyy] usb usb9-port2: Cannot enable. Maybe the USB cable is bad?


    I assume it's telling me it can't find the USB drive that is no longer plugged in.


    Is it safe to keep using OMV like this? And if so, is there a way for me to stop it from checking for the removed drive?
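

    If it helps, the message can be watched live with something like this (the port name is just what shows up in my log above):

    Code
    # follow kernel messages and filter for the USB port named in the error
    dmesg --follow | grep "usb9-port2"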


    Many thanks.

    I also received the alert again after the RAID recovery, which is not a huge deal. I did notice this in my mdadm.conf, though:


    Code
    # This file is auto-generated by openmediavault (https://www.openmediavault.org)
    # WARNING: Do not edit this file, your changes will get lost.
    
    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.


    The warning seems to run counter to the working fix in this thread. Also, I tried to look for "mdadm.conf(5)", but no such file exists in the same directory. Not sure whether I should be concerned or not.
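

    (If I'm reading the notation right, "mdadm.conf(5)" probably refers to a manual page section rather than a file, i.e. something like the following, so that part at least may be nothing to worry about.)

    Code
    # "(5)" is the man page section for file formats
    man 5 mdadm.conf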

    Hello,


    I'm curious about the "resource limit succeeded" alert that pops up every time my machine does a scheduled rsync backup.


    Here are a couple of examples of typical alerts:

    Code
    Description: loadavg (1min) check succeeded [current loadavg (1min) = 15.2]
    Code
    Description: loadavg (5min) check succeeded [current loadavg (5min) = 7.6]


    The PC I'm using for OMV is fairly old (i7-6700K, 16 GB RAM) but still pretty beefy for the task, I would imagine. It's a weekly backup to an external USB HDD array.
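

    For context, I understand the load average is usually read relative to the number of CPU threads, which can be checked with something like:

    Code
    # number of CPU threads available
    nproc
    # current 1, 5 and 15 minute load averages
    uptime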


    Is this something to worry about?

    If it rebuilds, then from the GUI select Remove in RAID Management, select the drive from the dialog, click OK, and the drive will be removed from the array.

    Luckily, the dying drive managed to survive through the rebuild last week and the reconstructed array completed a long SMART test without issue. OMV has been running fine since.


    Had an odd little bug pop up where I was getting "SparesMissing" notifications even though I've never tried to install spares. Perhaps the rebuilding routine added spares=1 to my mdadm.conf? Anyway, I changed it to spares=0 according to this post: Remove the "SparesMissing event"
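

    Concretely, the change was just spares=1 to spares=0 on the ARRAY line in /etc/mdadm/mdadm.conf, roughly like this (UUID shortened here):

    Code
    # before
    ARRAY /dev/md/tomanas1raid5 metadata=1.2 spares=1 name=tomanas1raid5 UUID=f1d3e5d2:...
    # after
    ARRAY /dev/md/tomanas1raid5 metadata=1.2 spares=0 name=tomanas1raid5 UUID=f1d3e5d2:...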


    Thank you again for your help.

    This would be the most sensible option due to the size of the drives; you would also have to remove the SMB shares, and then the shared folders, that are linked to the array.

    Thank you very much for the advice.


    Since I've never replaced a failed drive in a RAID array before, I decided to go ahead and rebuild the array by replacing the faulty drives one at a time, just to get a feel for it in case I need to do this again in the future.


    Also, just before I was going to write this reply, one of the drives (sdb) went and failed on its own. I removed it, and the array is now rebuilding with one good drive (sdc) and the other about to die (sdd). Assuming sdd survives long enough to get through the rebuild, I would then need to "fail the drive". As you mentioned, this can be done through the GUI, but I wasn't able to find that option in the "RAID Management" menu. Or is it "Remove" that does it?
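

    And in case the GUI option doesn't cover it, I assume the CLI equivalent would be something along these lines (the drive letter being whichever one is failing at that point):

    Code
    # mark the dying drive as failed, then remove it from the array
    mdadm /dev/md127 --fail /dev/sdd
    mdadm /dev/md127 --remove /dev/sdd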

    My dying HDDs:


    RAID5-001.png


    The drives above normally operate around 40-42 °C (not ideal, from what I've read), but I took the screenshot above with the case open.


    cat /proc/mdstat

    Code
    Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
    md127 : active raid5 sdd[1] sdc[2]
          15627788288 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
          bitmap: 0/59 pages [0KB], 65536KB chunk

    unused devices: <none>


    blkid

    (I have no idea why the UUID_SUB for the last one is so long, or why my setup is md127 instead of md0 or something numerically lower...)


    fdisk -l | grep "Disk "


    cat /etc/mdadm/mdadm.conf


    mdadm --detail --scan --verbose

    Code
    ARRAY /dev/md/tomanas1raid5 level=raid5 num-devices=3 metadata=1.2 name=tomanas1raid5 UUID=f1d3e5d2:a10e7ea5:82ac9b10:c0fbebcf
       devices=/dev/sdc,/dev/sdd
    Quote

    Post the type of drives and the quantity being used as well.

    3x Toshiba MN06ACA800/JP 8TB NAS HDD (CMR)

    Quote

    Post what happened for the array to stop working? Reboot? Power loss?

    The array is still working, but I assume it is approaching failure. I received the following email alert last week:


    ==========

    This message was generated by the smartd daemon running on:


    host name: omv

    DNS domain: mylocal


    The following warning/error was logged by the smartd daemon:


    Device: /dev/disk/by-id/ata-TOSHIBA_MN06ACA800_11Q0A0CPF5CG [SAT], FAILED SMART self-check. BACK UP DATA NOW!


    Device info: TOSHIBA MN06ACA800, S/N:11Q0A0CPF5CG, WWN:5-000039-aa8d1b7f8, FW:0603, 8.00 TB

    ==========

    (This was for /dev/sdd. I received an identical email for the other dying drive, /dev/sdb, at the same time.)


    I already have a regular backup of the data, and the replacement hard drives arrived a little while ago. Since all three HDDs in this RAID5 were purchased new and installed last April, I'm also quite concerned as to why two of them would start failing this early and simultaneously. But first things first: I've been reading up on how to reconstruct my setup with the replacement drives and came upon this thread/post:


    What you do is fail the drive using mdadm, then remove the drive using mdadm, shut down, install the new drive, and reboot; the raid should appear as clean/degraded. Then add the new drive using mdadm.

    At least the above is the procedure anyway.


    I just want to confirm I'm not misunderstanding what it means to "fail the drive using mdadm" before I do something stupid. Is the following what I should be entering via SSH/CLI?

    Code
    mdadm /dev/md127 -f /dev/sdb
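

    (And for the steps after that, I'm assuming the removal and re-add would look roughly like this, per the quoted procedure; the drive letter may of course change after the reboot:)

    Code
    # remove the failed drive from the array before shutting down
    mdadm /dev/md127 --remove /dev/sdb
    # after installing the new drive and rebooting, add it back in
    mdadm /dev/md127 --add /dev/sdb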


    Also, with 2 of the 3 drives nearing failure, I'm wondering if I should just rebuild the RAID5 from scratch and copy the data over from the backup, instead of trying to replace the drives one by one?


    Thank you!


    (edit: fixed code line breaks)

    I would like to donate as well, but not through PayPal, for the reasons omv_user_3741 mentioned.


    Since it is the only choice currently, I tried to set a donation up, but was blocked due to a country/region restriction.

    Just a quick follow-up question in case anybody can help me with a slight oddity.


    I disconnected the suspect JBOD USB device, but am unable to remove the entry from the File Systems menu:

    omv-removeerror.png


    The device remains listed even when I select "Delete" and go through the prompts, and even after a reboot.


    Does this mean I need to edit fstab?
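

    If it's as simple as a leftover mount entry, I assume I could check for it with something like this (substituting the UUID of the missing filesystem):

    Code
    # check whether the removed device still has an fstab entry
    grep "<filesystem-uuid>" /etc/fstab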

    I'm an idiot and mistook my desktop LUKS passphrase for the NAS one. Device #1 is back up and running. My apologies.


    That said, Device #2 appears to be in trouble. It doesn't even show up in the Encryption menu. The S.M.A.R.T. menu lists its status as gray, and the console keeps periodically reporting:


    Code
    Buffer I/O error on dev sdd, logical block 733577872, async page read...


    The size of this JBOD device is also being reported incorrectly as 2.73 TB. Both drives in it are well over 5 years old, so I'm kind of assuming the worst here.
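

    Before writing it off completely, I'll probably check what the drives themselves report, along the lines of the following (assuming smartmontools can still talk to them through the USB enclosure):

    Code
    # show SMART health and attributes for the suspect drive
    smartctl -a /dev/sdd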

    Hello,


    I'm having trouble unlocking two LUKS-encrypted devices after a normal reboot.

    - Device 1: 4TBx2 WD Red HDD (Raid 1, SATA)

    - Device 2: 3TBx2 HDD (JBOD, external USB device)


    After restarting, both devices went "Missing" under File Systems:

    omv-missingdevices.png


    When I tried to unlock with the usual LUKS passphrase, I got this error:

    omv-errorunlock.png


    Code
    Unable to unlock encrypted device: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; echo -n '128characterpassphrase' | cryptsetup luksOpen '/dev/md0' 'md0'-crypt --key-file=- 2>&1' with exit code '2': No key available with this passphrase.
    
    Error #0:
    OMV\Exception: Unable to unlock encrypted device: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; echo -n '128characterpassphrase' | cryptsetup luksOpen '/dev/md0' 'md0'-crypt --key-file=- 2>&1' with exit code '2': No key available with this passphrase. in /usr/share/openmediavault/engined/rpc/luks.inc:243
    Stack trace:
    #0 [internal function]: OMVRpcServiceLuksMgmt->openContainer(Array, Array)
    #1 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #2 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('openContainer', Array, Array)
    #3 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('LuksMgmt', 'openContainer', Array, Array, 1)
    #4 {main}

    Here is additional info about my setup:

    Code
    root@openmediavault:~# cat /proc/mdstat
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    unused devices: <none>


    Code
    root@openmediavault:~# blkid
    /dev/sdc1: UUID="f73c8eb5-3bfc-49d4-966c-84b3efc8c980" UUID_SUB="a6fa393d-d41c-b0f8-3b06-bf6ef9108f6d" LABEL="tomatonas1:0" TYPE="linux_raid_member" PARTLABEL="Linux RAID" PARTUUID="dd328dae-df66-44d4-803d-b167b6bcd959"
    /dev/sdb1: UUID="f73c8eb5-3bfc-49d4-966c-84b3efc8c980" UUID_SUB="011ff11e-1a2b-fcfb-043a-c414acde479d" LABEL="tomatonas1:0" TYPE="linux_raid_member" PARTLABEL="Linux RAID" PARTUUID="9b36edf1-7c1c-4ae2-8e5e-8af112dd0e55"
    /dev/sda1: UUID="EAE0-223A" TYPE="vfat" PARTUUID="2e5ca618-f17a-4214-9f06-199100177543"
    /dev/sda2: UUID="5e6afd77-93a0-4ad1-a115-e868924fa750" TYPE="ext4" PARTUUID="090f26b9-ace9-4a99-87bc-2693a086cf04"
    /dev/sda3: UUID="9feb9a6b-c9c0-48e6-a010-a9b9848a120f" TYPE="swap" PARTUUID="e28ca370-f90c-4204-b900-c531e0177fb1"



    Code
    root@openmediavault:~# mdadm --detail --scan --verbose
    ARRAY /dev/md/tomatonas1:0 level=raid1 num-devices=2 metadata=1.2 name=tomatonas1:0 UUID=f73c8eb5:3bfc49d4:966c84b3:efc8c980
       devices=/dev/sdb1,/dev/sdc1



    Any advice would be much appreciated. Thank you.

    Hello,


    I hope I'm not overlooking anything obvious, but I'm trying to install Nextcloud using Techno Dad Life's guide:

    [External content: youtu.be (embedded video)]


    When attempting to search for "linuxserver/mariadb", nothing turns up. Other images beginning with "linuxserver/" return results normally.
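

    As a workaround, I assume I could pull the image directly from the command line instead of going through the search, e.g.:

    Code
    # pull the image directly from Docker Hub, bypassing the plugin's search
    docker pull linuxserver/mariadb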


    Am I doing something incorrectly?