Posts by Askbruzz

    Can someone explain this to me?

    Oh man, thank you very much. <3

    Good morning,

    I need help.


    One of my HDDs in my RAID 6 broke, so I shut down the NAS, removed the faulty HDD, and replaced it with a new one. The problem is that I no longer see my RAID 6.


    I made a big mistake by removing the HDD without first removing it from the array. I no longer have the old drive.


    Total number of HDDs: 9


    The output below was taken without the new hard drive installed.

    Code
    root@NasOMV:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : inactive sdg[7](S) sde[3](S) sdf[6](S) sdc[1](S) sdh[8](S) sdb[0](S) sda[5](S) sdd[2](S)
          93750063104 blocks super 1.2
    
    unused devices: <none>


    Code
    root@NasOMV:~# mdadm --detail --scan --verbose
    INACTIVE-ARRAY /dev/md0 num-devices=8 metadata=1.2 name=NasOMV:BigData UUID=791d5068:980fb9e9:3729dd39:f33f2a59
       devices=/dev/sda,/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf,/dev/sdg,/dev/sdh
    root@NasOMV:~#

    The output below was taken with the new hard drive installed.


    Is there anything I can do?
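    For anyone who lands here later: the usual way out of an inactive array like this is a forced reassembly from the surviving members. A rough sketch, assuming the eight survivors are still /dev/sda through /dev/sdh as in the mdstat output above and the new blank disk shows up as /dev/sdi (both names are assumptions; check with lsblk first):

```shell
# Sketch only: the commands are built as strings so they can be reviewed
# before being run as root. Device names are assumptions from the post.
survivors="/dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh"
replacement="/dev/sdi"   # hypothetical name of the new blank disk

stop_cmd="mdadm --stop /dev/md0"
# --force lets mdadm accept members whose event counts drifted apart.
# RAID 6 tolerates two missing disks, so 8 of 9 is enough to run degraded.
assemble_cmd="mdadm --assemble --force /dev/md0 $survivors"
# Once the array is active again, add the replacement and let it rebuild.
add_cmd="mdadm --manage /dev/md0 --add $replacement"

printf '%s\n' "$stop_cmd" "$assemble_cmd" "$add_cmd"
```

    After the --add, cat /proc/mdstat should show a recovery line with a progress percentage; with nine large disks the rebuild can take a day or more.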


    Thank you very much.

    I don't know, but what you have to remember in a RAID setup is that data is written across all the drives. Most home users don't need RAID; there are other ways to store data. RAID options are driven by hardware vendors, and most users think that's the norm. If someone wants to use software RAID, they need to understand how it works and how to recover from it.

    What you have set up is something I would never ever consider: nine drives in a single array. mergerfs and SnapRAID would be a better choice.

    You say the NAS is noisy; I take it that's from the drives. If so, it's something I experienced some time ago, but that was with old hardware and older drives.

    I understand your point of view, but when someone is a newbie like me, mistakes are easy to make.


    The only problem is that the NAS is now noisy because the drives write every 5 seconds; before I expanded the RAID the NAS was silent and the drives went to sleep.

    :)


    I think it's too late to make changes now :(
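    For what it's worth, that 5-second rhythm matches ext4's default journal commit interval: the jbd2 kernel thread flushes the journal every 5 seconds while there is dirty data. A hedged /etc/fstab sketch that batches commits once a minute instead (the mount point is an assumption based on OMV's label-based layout and the NasData label seen in the fsck output; OMV manages fstab itself, so check before editing):

```
# /etc/fstab sketch -- assumes the filesystem on /dev/md0 is the ext4
# volume labelled NasData, mounted where OMV put it (adjust to taste).
# commit=60 flushes the journal once a minute instead of every 5 s, at
# the cost of losing up to a minute of writes on a power failure.
/dev/md0  /srv/dev-disk-by-label-NasData  ext4  defaults,noatime,commit=60  0  2
```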

    Ok, if you search the log output for jbd2/md0-8(828): WRITE block 35153047312 on md0 (8 sectors), and likewise ext4lazyinit(830): WRITE block 79633063168 on md0 (1024 sectors) and md0_raid6(270): WRITE block 16 on sdf (8 sectors):


    According to what I have read this will stop, but due to the number of drives in the array it could take some time.
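    Those log lines already name the culprits in the process(pid) prefix: jbd2 is the ext4 journal thread, and ext4lazyinit is the background inode-table initialisation that runs after a mkfs or resize and stops once it has covered the whole filesystem. A small sketch that tallies writers from block_dump-style log lines (the three sample lines are the ones quoted above; in practice you would pipe dmesg in):

```shell
# Count WRITE entries per process from block_dump-style kernel log lines.
# Sample data inlined for illustration; in practice pipe `dmesg` in.
log='jbd2/md0-8(828): WRITE block 35153047312 on md0 (8 sectors)
ext4lazyinit(830): WRITE block 79633063168 on md0 (1024 sectors)
md0_raid6(270): WRITE block 16 on sdf (8 sectors)'

# Strip the "(pid): ..." tail and count occurrences per process name.
printf '%s\n' "$log" | sed 's/(.*//' | sort | uniq -c | sort -rn
```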

    Sorry, but one more question: is it normal that the blocks written are always the same?


    Thank you so much.

    I hope this will end soon :D

    I'll wait a couple of days and see what happens :)

    It's been 2 days now and this thing doesn't stop.

    That always makes me :) when that is quoted. You need to check under Storage -> Disks -> SMART, run a Long self-test on each drive, and review the output.

    Thanks, running a long test now :)

    When you say formatted I assume you mean wiped, and judging by that log you now have 9 drives in that RAID 6.


    What's the output of cat /proc/mdstat?

    Here is the output:



    What you would do is run fsck /dev/md0 (what is fsck).

    Here is the output:

    Code
    root@NasOMV:~# fsck /dev/md0
    fsck from util-linux 2.29.2
    e2fsck 1.44.5 (15-Dec-2018)
    NasData: clean, 150847/1281738752 files, 7788695376/20507820032 blocks
    root@NasOMV:~#

    Not necessarily. If you google the information from the log, it has something to do with the recent RAID expansion. What is the condition of the drives, i.e. are any bad sectors being referenced in SMART, any SMART errors, particularly attributes 5, 197, 198? You might need to run fsck on the array itself.
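    Those three attributes are the reallocated, pending, and offline-uncorrectable sector counters. A sketch that flags any non-zero raw values from smartctl -A output (the sample lines below are made up for illustration; real layouts vary by drive, so in practice feed in smartctl -A /dev/sdX):

```shell
# Flag SMART attributes 5, 197 and 198 when their raw value is non-zero.
# Sample `smartctl -A` lines inlined; field 10 is the raw value.
sample='  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       8
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0'

bad=$(printf '%s\n' "$sample" | awk '$1==5 || $1==197 || $1==198 { if ($10+0 > 0) print $1, $2, "raw="$10 }')
printf '%s\n' "$bad"   # any output here deserves a closer look
```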

    Hello and Thanks :)

    I have checked and, I think, there are no SMART errors.

    The 4 disks used to expand the RAID were previously used in a Synology NAS, and before expanding the OMV RAID I only formatted them in quick mode.


    I have no idea how to use fsck; I'm a complete noob with OMV ;(

    Thank you so much.


    I have found this:


    What can it be? :D

    Good morning,

    I have a problem with disk usage after expanding the RAID and the file system (via the WebGUI).

    Now, as you can see in the image, the disks are always in constant use and I can't understand why.

    I'm new to OMV and I don't really know what to do.


    Before expanding the RAID the disks went to spindown, but now they no longer do.

    Do you have any advice?

    Please help me :(


    Thanks so much.
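    One way to see which disks are actually being written, without installing anything, is to diff /proc/diskstats over a short interval. A rough sketch (on a system like this, md0 and its members would be the ones showing constant writes; a disk that shows writes every sample cannot spin down):

```shell
# Sample /proc/diskstats twice and report devices whose completed-write
# counter (field 8) moved in between.
interval=5
snap() { awk '{ print $3, $8 }' /proc/diskstats; }   # device name, writes

before=$(snap)
sleep "$interval"
after=$(snap)

printf '%s\n' "$after" | while read -r dev w2; do
    w1=$(printf '%s\n' "$before" | awk -v d="$dev" '$1 == d { print $2 }')
    w1=${w1:-0}
    if [ "$w1" -lt "$w2" ]; then
        echo "$dev: $((w2 - w1)) writes in ${interval}s"
    fi
done
```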

    Thanks for your tips :)


    If I set up 2 different systems, one for storage and one for Plex, could the LAN cause problems when Plex transcodes? I mean bandwidth problems.


    At the moment I have a backup on 3x 8 TB external drives and a secure backup on Amazon AWS, but on Amazon I store only the most important things.


    About re-encoding: at the moment it's not possible due to the few free TB on my NAS.