Posts by SeaBee

Why? I've just run this up on a VM with RAID 0 across 4 drives; the config files are exactly the same as yours and it rebooted without issue. The missing firmware means that for whatever reason that module is missing, and it would probably not affect your system anyway. I've run that on the VM and it's fine, no missing modules, and my kernel is the same.

Weird, might just be something up with my hardware then.


Well, at least the main issue is fixed.


    Thanks a ton for your help :)

Huh, OK, so I took a chance and replaced the GRUB_CMDLINE_LINUX_DEFAULT="quiet" line with GRUB_CMDLINE_LINUX_DEFAULT="raid0.default_layout=2",


then ran update-grub, rebooted the machine, and it looks like it worked!
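(One note in case anyone copies this: as far as I know, dropping quiet isn't actually required; kernel parameters are space-separated, so both can sit on the same line, e.g.:)


Code
GRUB_CMDLINE_LINUX_DEFAULT="quiet raid0.default_layout=2"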


The RAID array is still present!


Also, after running the update-initramfs -u command from before, only one issue remains. Should I make a new post somewhere else or keep trying in this thread?


    Code
    update-initramfs: Generating /boot/initrd.img-5.5.0-0.bpo.2-amd64
    W: Possible missing firmware /lib/firmware/rtl_nic/rtl8168fp-3.fw for module r8169

According to that output and that article, the default layout should be 2, and that should be set in /etc/default/grub. Again, the article says there should be a line in grub that states GRUB_CMDLINE_LINUX_DEFAULT="raid0.default_layout=2".

Is it OK to just sudo nano the grub file, edit it, and reboot?
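For reference, the sequence I have in mind is just standard Debian GRUB housekeeping, nothing OMV-specific:


Code
sudo nano /etc/default/grub   # edit the GRUB_CMDLINE_LINUX_DEFAULT line
sudo update-grub              # regenerate /boot/grub/grub.cfg
sudo reboot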




My current grub file looks like this:



Would it be safe to just replace the GRUB_CMDLINE_LINUX_DEFAULT="quiet" line with GRUB_CMDLINE_LINUX_DEFAULT="raid0.default_layout=2"?

OK, so I found another thread (here) about one of the missing firmware warnings, which suggested:

    cd /lib/firmware/rtl_nic


    and


    wget https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git/plain/rtl_nic/rtl8125a-3.fw


then I updated the initramfs again with


    update-initramfs -u


but I'm still left with this one missing firmware warning:


    Code
    update-initramfs: Generating /boot/initrd.img-5.5.0-0.bpo.2-amd64
    W: Possible missing firmware /lib/firmware/rtl_nic/rtl8168fp-3.fw for module r8169
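If the same trick works for the remaining file, the analogous fetch would be the following (assuming rtl8168fp-3.fw exists in the upstream linux-firmware tree; Debian's non-free firmware-realtek package should ship the whole rtl_nic set as well):


Code
cd /lib/firmware/rtl_nic
wget https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git/plain/rtl_nic/rtl8168fp-3.fw
update-initramfs -u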

OK, so mdadm --detail /dev/md0 gives:


That gives some missing firmware warnings:


    Code
    update-initramfs: Generating /boot/initrd.img-5.5.0-0.bpo.2-amd64
    W: Possible missing firmware /lib/firmware/rtl_nic/rtl8125a-3.fw for module r8169
    W: Possible missing firmware /lib/firmware/rtl_nic/rtl8168fp-3.fw for module r8169


Also, it looks like it could be some network firmware issue. For some extra info: I don't have any wireless card installed; my server is just connected through the motherboard's Ethernet port.
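If it helps, this should show exactly which Realtek NIC is on the board (the r8169 driver covers a whole family of chips, which is presumably why it probes for several firmware files):


Code
lspci -nn | grep -i ethernet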

Here's the output from cat /etc/fstab:


Also, the way I'm able to make the array reappear after every reboot is by issuing these 3 commands:


    echo 2 > /sys/module/raid0/parameters/default_layout


    then...


    mdadm --stop /dev/md0


    then...


    mdadm --assemble --force --verbose /dev/md0 /dev/sd[abce]



and that brings the array back.
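Since raid0.default_layout is a module parameter, a possibly cleaner way to make this stick (a sketch, I haven't tried it myself; the file name is just my choice) would be a modprobe option plus an initramfs rebuild, instead of typing the commands after every boot:


Code
# hypothetical file name; any .conf under /etc/modprobe.d/ works
echo "options raid0 default_layout=2" > /etc/modprobe.d/raid0.conf
update-initramfs -u   # so the setting also applies inside the initramfs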

OK, so here goes attempt 2.


I recently wiped my old OMV version 4 server and installed OMV 5 (5.5.2-1), which has been going great, until I ran into a bug again: back in OMV 4 I started having an issue where my RAID 0 array would disappear after every reboot and I would have to run some commands just to make it come back.


    My hardware layout is as follows:


- sda - 4TB Storage Disk

- sdb - 3TB Storage Disk

- sdc - 2TB Storage Disk

- sdd - random USB stick I had in the machine

- sde - 2TB Storage Disk

- sdf - Boot Drive



Also, here are some commands I ran which can hopefully help to speed up a fix :)



    cat /etc/mdadm/mdadm.conf





    fdisk -l | grep "Disk " | grep sd | sort


    Code
    Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
    Disk /dev/sdc: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk /dev/sdd: 7.5 GiB, 8053063680 bytes, 15728640 sectors
    Disk /dev/sde: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk /dev/sdf: 111.8 GiB, 120040980480 bytes, 234455040 sectors




    cat /proc/mdstat


    Code
    Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid0 sda[0] sde[3] sdc[2] sdb[1]
          10743784960 blocks super 1.2 512k chunks
    
    unused devices: <none>



    blkid

    Code
    /dev/sda: UUID="87674581-aa4a-abfe-9312-d183a8ed4906" UUID_SUB="02249442-0c9b-53e9-0680-f81e34c7a115" LABEL="cb-server.cb-server:CorePool" TYPE="linux_raid_member"
    /dev/sdc: UUID="87674581-aa4a-abfe-9312-d183a8ed4906" UUID_SUB="d81adc61-5679-43cb-44d4-897697ae6797" LABEL="cb-server.cb-server:CorePool" TYPE="linux_raid_member"
    /dev/sdb: UUID="87674581-aa4a-abfe-9312-d183a8ed4906" UUID_SUB="72eca10b-a959-b3d9-2cf3-31ba469ea87e" LABEL="cb-server.cb-server:CorePool" TYPE="linux_raid_member"
    /dev/sdd1: LABEL="OPENMEDIAVA" UUID="48A6-D0E3" TYPE="vfat" PARTLABEL="Microsoft Basic Data" PARTUUID="cee497a1-4ba6-4a90-ac8e-e3d509963f8b"
    /dev/sdf1: UUID="0925-8731" TYPE="vfat" PARTUUID="8df33f5f-045b-40e0-9069-417541e8b206"
    /dev/sdf2: UUID="e82406a6-2d1e-40a6-8407-39e6244667a3" TYPE="ext4" PARTUUID="270da3fa-168e-4b5a-8660-ad5ee2087e92"
    /dev/sdf3: UUID="c7521f20-4245-4779-897a-a168f27daf98" TYPE="swap" PARTUUID="c33c815f-d140-411f-8be2-ebaadf671173"
    /dev/sde: UUID="87674581-aa4a-abfe-9312-d183a8ed4906" UUID_SUB="c21c472f-43f2-546f-a21f-e7f300d1d143" LABEL="cb-server.cb-server:CorePool" TYPE="linux_raid_member"
    /dev/md0: LABEL="CorePoolData" UUID="6286e1c0-da5a-481e-98e5-5efc57f0463c" TYPE="ext4"




    fdisk -l | grep "Disk "




    mdadm --detail --scan --verbose


    Code
    ARRAY /dev/md0 level=raid0 num-devices=4 metadata=1.2 name=cb-server.cb-server:CorePool UUID=87674581:aa4aabfe:9312d183:a8ed4906
       devices=/dev/sda,/dev/sdb,/dev/sdc,/dev/sde



    Any help would be greatly appreciated :)


*Also, yes, I know RAID 0 is risky, with one disk dying causing a total loss of data, but I have weekly backups, which works fine for me.*

OK, so I was reading through another forum and found someone else tried this command, and it seems to have fixed my problem :D


    echo 2 > /sys/module/raid0/parameters/default_layout


Then I had to stop the array again because the disks were busy,


so I did:


    mdadm --stop /dev/md0


and finally the assemble command again:


    mdadm --assemble --force --verbose /dev/md0 /dev/sd[abcd]


    and it's back!



Thanks so much geaves for the stop command; looks like that's what fixed it!

I managed to stop it OK, but the second command gave an error 524 :(


    Code
    mdadm: looking for devices for /dev/md0
    mdadm: /dev/sda is identified as a member of /dev/md0, slot 0.
    mdadm: /dev/sdb is identified as a member of /dev/md0, slot 1.
    mdadm: /dev/sdc is identified as a member of /dev/md0, slot 2.
    mdadm: /dev/sdd is identified as a member of /dev/md0, slot 3.
    mdadm: added /dev/sdb to /dev/md0 as 1
    mdadm: added /dev/sdc to /dev/md0 as 2
    mdadm: added /dev/sdd to /dev/md0 as 3
    mdadm: added /dev/sda to /dev/md0 as 0
    mdadm: failed to RUN_ARRAY /dev/md0: Unknown error 524
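For anyone hitting this later: error 524 is ENOTSUPP from the kernel, and checking the kernel log right after the failed assemble should show the reason (the exact wording may differ by kernel version):


Code
dmesg | tail
# on affected kernels this prints something like:
#   md/raid0:md0: cannot assemble multi-zone RAID0 with default_layout setting
#   md/raid0: please set raid0.default_layout to 1 or 2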

    Hi All,


So I was having some issues updating Plex and decided to reboot my system, but after I rebooted, my RAID array disappeared.


    System details are:

    OMV Version: 4.3.35-1 (Arrakis)

Kernel Version: Linux 4.19.0-0.bpo.8-amd64


    Also my disk layout is as follows:

    sda: Storage HDD 1

    sdb: Storage HDD 2

    sdc: Storage HDD 3

    sdd: Storage HDD 4

    sde: Boot SSD


I've also tried commands from a few other threads; hopefully this might be of some help.


    cat /etc/mdadm/mdadm.conf




    fdisk -l | grep "Disk " | grep sd | sort

    Code
Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdc: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sdd: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk /dev/sde: 111.8 GiB, 120040980480 bytes, 234455040 sectors



    mdadm --assemble --force --verbose /dev/md0 /dev/sd[abcd]

    Code
    mdadm: looking for devices for /dev/md0
    mdadm: /dev/sda is busy - skipping
    mdadm: /dev/sdb is busy - skipping
    mdadm: /dev/sdc is busy - skipping
    mdadm: /dev/sdd is busy - skipping



    cat /proc/mdstat

    Code
    Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sda[0] sdd[3] sdc[2] sdb[1]
      10743789056 blocks super 1.2
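The "busy - skipping" above seems to be because md0 is sitting there inactive but still claiming the disks, so it has to be stopped before it can be reassembled (this is the stop command geaves suggested, which is what ended up working):


Code
mdadm --stop /dev/md0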


    blkid

    Code
    /dev/sda: UUID="a1c01a92-146a-9649-e182-8fba2a7e3bd0" UUID_SUB="c5ee2f55-4f93-8b6b-e330-f54589e9a3d8" LABEL="cb-server:CorePool" TYPE="linux_raid_member"
    /dev/sdb: UUID="a1c01a92-146a-9649-e182-8fba2a7e3bd0" UUID_SUB="c7de846a-b549-e1dd-bb97-14d6bf392102" LABEL="cb-server:CorePool" TYPE="linux_raid_member"
    /dev/sdc: UUID="a1c01a92-146a-9649-e182-8fba2a7e3bd0" UUID_SUB="c87cd183-dabe-92d4-6db3-d65d117fe444" LABEL="cb-server:CorePool" TYPE="linux_raid_member"
    /dev/sdd: UUID="a1c01a92-146a-9649-e182-8fba2a7e3bd0" UUID_SUB="5186cf9d-59ff-ec57-254b-f085e33e529d" LABEL="cb-server:CorePool" TYPE="linux_raid_member"
    /dev/sde1: UUID="5AB9-B88F" TYPE="vfat" PARTUUID="f6477c78-7c5e-4f1a-9775-55554ec754fa"
    /dev/sde2: UUID="b0affabe-1dae-479d-bd18-10500434d0ad" TYPE="ext4" PARTUUID="9300084e-f3ae-4fee-b931-2cfd1fb5b6dd"
    /dev/sde3: UUID="a95d884f-b667-4f4e-b003-8a461b653c82" TYPE="swap" PARTUUID="1a637345-b2e8-491f-b181-35633d447317"




    fdisk -l | grep "Disk "

    Code
    Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
    Disk /dev/sdc: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk /dev/sdd: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk /dev/sde: 111.8 GiB, 120040980480 bytes, 234455040 sectors
    Disk identifier: B1773E09-BD8B-471C-A22D-AA686CA7A7A7




    mdadm --detail --scan --verbose

    Code
ARRAY /dev/md0 level=raid0 num-devices=4 metadata=1.2 name=cb-server:CorePool UUID=a1c01a92:146a9649:e1828fba:2a7e3bd0
   devices=/dev/sda,/dev/sdb,/dev/sdc,/dev/sdd
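Also, in case the mdadm.conf above turns out to be missing its ARRAY line, the usual way to persist it is to append the scan output and rebuild the initramfs (generic mdadm housekeeping; it wasn't the root cause here, but it rules one thing out):


Code
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u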

I'm trying to install the BTSync plugin from OMV-Extras, but when I try to install it I get this error :(


Also, I'm using v2.2.13.


Code
>>> *************** Error ***************
Failed to execute command 'export LANG=C; export DEBIAN_FRONTEND=noninteractive; apt-get --yes --force-yes --fix-missing --allow-unauthenticated --reinstall install openmediavault-btsync 2>&1': Reading package lists...
Building dependency tree...
Reading state information...
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
 openmediavault-btsync : Depends: btsync (< 2) but it is not installable
E: Unable to correct problems, you have held broken packages.
<<< *************************************
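A quick way to see why apt considers btsync not installable (plain apt, nothing plugin-specific) would be:


Code
apt-cache policy btsync   # shows which versions, if any, apt can see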