What do I have to do to mark this as solved?
Posts by Askbruzz
-
Can someone explain this to me?
Code
A SparesMissing event had been detected on md device /dev/md0.

Faithfully yours, etc.

P.S. The /proc/mdstat file currently contains the following:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid6 sdf[9] sdb[0] sda[5] sdg[6] sdh[7] sdi[8] sde[3] sdd[2] sdc[1]
      82031280128 blocks super 1.2 level 6, 512k chunk, algorithm 2 [9/9] [UUUUUUUUU]
      bitmap: 0/88 pages [0KB], 65536KB chunk

unused devices: <none>
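For context, a SparesMissing event usually means the ARRAY line in /etc/mdadm/mdadm.conf declares a spares= count that the running array does not actually have. That cause is an assumption here, but comparing the live array with the config is a quick way to confirm it:

Code
# What the running array actually has:
mdadm --detail /dev/md0 | grep -i spare

# What the config claims; compare this with the ARRAY line in mdadm.conf:
mdadm --detail --scan
grep ^ARRAY /etc/mdadm/mdadm.conf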
-
Hi, the RAID is still there but in an inactive state. It is easy to recover with the commands below:
Code
mdadm --stop /dev/md0
mdadm --assemble /dev/md0 --uuid=791d5068:980fb9e9:3729dd39:f33f2a59
cat /proc/mdstat
If the status of the RAID shows "active" at the third step, OMV will see the array again in a degraded state. Then you can recover your RAID in the web GUI of OMV.
Keep in mind: never restart or shut down the machine while you have disk trouble. You have to remove the bad disk in the web GUI first; afterwards you can shut down the machine and swap the physical disk.
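For reference, the command-line equivalent of that GUI step is sketched below; /dev/sdX is a placeholder for whichever member actually failed:

Code
# Mark the failing member as faulty, then remove it from the array:
mdadm --manage /dev/md0 --fail /dev/sdX
mdadm --manage /dev/md0 --remove /dev/sdX

# Only after this is it safe to shut down and swap the physical disk.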
Good luck
Oh man, thank you very much.
-
Good morning,
I need help.
One of my HDDs in my RAID 6 broke, so I shut down the NAS, removed the faulty HDD, and replaced it with a new one. The problem is that I no longer see my RAID 6.
I made a big mistake by removing the HDD without first removing it from the array. I no longer have the old drive.
Total number of HDDs: 9
The output below was taken without the new hard drive installed.
Code
root@NasOMV:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sdg[7](S) sde[3](S) sdf[6](S) sdc[1](S) sdh[8](S) sdb[0](S) sda[5](S) sdd[2](S)
      93750063104 blocks super 1.2

unused devices: <none>
Code
root@NasOMV:~# blkid
/dev/nvme0n1p1: UUID="40DD-D47C" TYPE="vfat" PARTUUID="b3fb1408-064f-4a09-ba33-4c0dfc46d0d7"
/dev/nvme0n1p2: UUID="d9e77154-7735-4488-a782-083dcb89d261" TYPE="ext4" PARTUUID="fa9a2270-ed17-4caa-af92-e29537716c05"
/dev/nvme0n1p3: UUID="78e32ede-6f24-47f3-b707-b866dada9262" TYPE="swap" PARTUUID="64150c69-ad92-45ff-bc9e-d1f659ac6034"
/dev/nvme0n1p4: LABEL="Cache" UUID="ef52ac66-19d2-4ef2-aee2-de73e95c10c4" TYPE="ext4" PARTUUID="695f402c-954a-4df8-bc87-92cfe7e5846c"
/dev/sda: UUID="791d5068-980f-b9e9-3729-dd39f33f2a59" UUID_SUB="bbc608da-855e-a627-1247-9003ccaa105b" LABEL="NasOMV:BigData" TYPE="linux_raid_member"
/dev/sdd: UUID="791d5068-980f-b9e9-3729-dd39f33f2a59" UUID_SUB="ccb17d79-74c2-5c80-639c-340c8972b850" LABEL="NasOMV:BigData" TYPE="linux_raid_member"
/dev/sdc: UUID="791d5068-980f-b9e9-3729-dd39f33f2a59" UUID_SUB="a5a19393-bb17-8349-11c7-b65fbd529cfb" LABEL="NasOMV:BigData" TYPE="linux_raid_member"
/dev/sdb: UUID="791d5068-980f-b9e9-3729-dd39f33f2a59" UUID_SUB="590a37a4-e4d1-d783-6890-d903ac5e7b36" LABEL="NasOMV:BigData" TYPE="linux_raid_member"
/dev/sde: UUID="791d5068-980f-b9e9-3729-dd39f33f2a59" UUID_SUB="f90fb80c-0e01-9857-e9fb-ff3869a64cac" LABEL="NasOMV:BigData" TYPE="linux_raid_member"
/dev/sdf: UUID="791d5068-980f-b9e9-3729-dd39f33f2a59" UUID_SUB="b96039d9-95e9-24b0-efc2-0add7e7528c5" LABEL="NasOMV:BigData" TYPE="linux_raid_member"
/dev/sdg: UUID="791d5068-980f-b9e9-3729-dd39f33f2a59" UUID_SUB="1a416a5c-f4f1-bd2b-0943-fb6be60054f1" LABEL="NasOMV:BigData" TYPE="linux_raid_member"
/dev/sdh: UUID="791d5068-980f-b9e9-3729-dd39f33f2a59" UUID_SUB="ae8f8f25-8ac6-8d65-ae97-05c27fb8c418" LABEL="NasOMV:BigData" TYPE="linux_raid_member"
/dev/nvme0n1: PTUUID="0fabdc47-ea67-4c86-993f-72733b6b0e5e" PTTYPE="gpt"
root@NasOMV:~#
Code
root@NasOMV:~# cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 name=NasOMV:BigData UUID=791d5068:980fb9e9:3729dd39:f33f2a59

# instruct the monitoring daemon where to send mail alerts
MAILFROM root
root@NasOMV:~#
Code
root@NasOMV:~# mdadm --detail --scan --verbose
INACTIVE-ARRAY /dev/md0 num-devices=8 metadata=1.2 name=NasOMV:BigData UUID=791d5068:980fb9e9:3729dd39:f33f2a59
   devices=/dev/sda,/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf,/dev/sdg,/dev/sdh
root@NasOMV:~#
Code
root@NasOMV:~# mdadm --examine /dev/sdf
mdadm: No md superblock detected on /dev/sdf.
root@NasOMV:~# mdadm --examine /dev/sda
/dev/sda:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 791d5068:980fb9e9:3729dd39:f33f2a59
           Name : NasOMV:BigData  (local to host NasOMV)
  Creation Time : Mon Feb 10 12:34:39 2020
     Raid Level : raid6
   Raid Devices : 9

 Avail Dev Size : 23437515776 (11175.88 GiB 12000.01 GB)
     Array Size : 82031280128 (78231.13 GiB 84000.03 GB)
  Used Dev Size : 23437508608 (11175.88 GiB 12000.00 GB)
    Data Offset : 254976 sectors
   Super Offset : 8 sectors
   Unused Space : before=254888 sectors, after=7168 sectors
          State : clean
    Device UUID : bbc608da:855ea627:12479003:ccaa105b

Internal Bitmap : 8 sectors from superblock
    Update Time : Mon Jun  1 17:23:08 2020
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : bacf6102 - correct
         Events : 30148

         Layout : left-symmetric
     Chunk Size : 512K

    Device Role : Active device 8
    Array State : AAAAAAAAA ('A' == active, '.' == missing, 'R' == replacing)
root@NasOMV:~#
The output below was taken with the new hard drive installed.
Code
root@NasOMV:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sdi[8](S) sda[5](S) sde[3](S) sdh[7](S) sdg[6](S) sdc[1](S) sdb[0](S) sdd[2](S)
      93750063104 blocks super 1.2

unused devices: <none>
root@NasOMV:~#
Code
root@NasOMV:~# blkid
/dev/nvme0n1p1: UUID="40DD-D47C" TYPE="vfat" PARTUUID="b3fb1408-064f-4a09-ba33-4c0dfc46d0d7"
/dev/nvme0n1p2: UUID="d9e77154-7735-4488-a782-083dcb89d261" TYPE="ext4" PARTUUID="fa9a2270-ed17-4caa-af92-e29537716c05"
/dev/nvme0n1p3: UUID="78e32ede-6f24-47f3-b707-b866dada9262" TYPE="swap" PARTUUID="64150c69-ad92-45ff-bc9e-d1f659ac6034"
/dev/nvme0n1p4: LABEL="Cache" UUID="ef52ac66-19d2-4ef2-aee2-de73e95c10c4" TYPE="ext4" PARTUUID="695f402c-954a-4df8-bc87-92cfe7e5846c"
/dev/sdb: UUID="791d5068-980f-b9e9-3729-dd39f33f2a59" UUID_SUB="590a37a4-e4d1-d783-6890-d903ac5e7b36" LABEL="NasOMV:BigData" TYPE="linux_raid_member"
/dev/sda: UUID="791d5068-980f-b9e9-3729-dd39f33f2a59" UUID_SUB="bbc608da-855e-a627-1247-9003ccaa105b" LABEL="NasOMV:BigData" TYPE="linux_raid_member"
/dev/sdc: UUID="791d5068-980f-b9e9-3729-dd39f33f2a59" UUID_SUB="a5a19393-bb17-8349-11c7-b65fbd529cfb" LABEL="NasOMV:BigData" TYPE="linux_raid_member"
/dev/sde: UUID="791d5068-980f-b9e9-3729-dd39f33f2a59" UUID_SUB="f90fb80c-0e01-9857-e9fb-ff3869a64cac" LABEL="NasOMV:BigData" TYPE="linux_raid_member"
/dev/sdd: UUID="791d5068-980f-b9e9-3729-dd39f33f2a59" UUID_SUB="ccb17d79-74c2-5c80-639c-340c8972b850" LABEL="NasOMV:BigData" TYPE="linux_raid_member"
/dev/sdg: UUID="791d5068-980f-b9e9-3729-dd39f33f2a59" UUID_SUB="b96039d9-95e9-24b0-efc2-0add7e7528c5" LABEL="NasOMV:BigData" TYPE="linux_raid_member"
/dev/sdh: UUID="791d5068-980f-b9e9-3729-dd39f33f2a59" UUID_SUB="1a416a5c-f4f1-bd2b-0943-fb6be60054f1" LABEL="NasOMV:BigData" TYPE="linux_raid_member"
/dev/sdi: UUID="791d5068-980f-b9e9-3729-dd39f33f2a59" UUID_SUB="ae8f8f25-8ac6-8d65-ae97-05c27fb8c418" LABEL="NasOMV:BigData" TYPE="linux_raid_member"
/dev/nvme0n1: PTUUID="0fabdc47-ea67-4c86-993f-72733b6b0e5e" PTTYPE="gpt"
root@NasOMV:~#
Code
root@NasOMV:~# cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 name=NasOMV:BigData UUID=791d5068:980fb9e9:3729dd39:f33f2a59

# instruct the monitoring daemon where to send mail alerts
MAILFROM root
root@NasOMV:~#
Is there something I can do?
Thank you very much.
-
Hello.
Now the problem is gone. I think it was the lazy init process.
-
The SMART test completed without any errors.
-
I don't know, but what you have to remember with a RAID setup is that data is written across all the drives. Most home users don't need to use RAID; there are other ways to store data. RAID options are driven by hardware vendors, and most users think that is the norm. If someone wants to use a software RAID, they need to understand how it works and how to recover from it.
What you have set up is something I would never ever consider: nine drives in a single array. mergerfs and SnapRAID would be a better choice.
You say the NAS is noisy, and I take it that is from the drives. If so, it's something I experienced some time ago, but that was from old hardware and older drives.
I understand your point of view, but when someone is a newbie like me, these errors are easy to make.
The only problem is that the NAS is now noisy because the drives write every 5 seconds; before expanding the RAID the NAS was silent and the drives went to sleep.
I think it's too late now to make changes.
-
OK, if you search the log output for jbd2/md0-8(828): WRITE block 35153047312 on md0 (8 sectors), likewise ext4lazyinit(830): WRITE block 79633063168 on md0 (1024 sectors), and also md0_raid6(270): WRITE block 16 on sdf (8 sectors): according to what I have read, this will stop, but due to the number of drives in the array it could take some time.
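As an aside, the lazy initialization can be avoided entirely at filesystem creation time; this is standard mke2fs behaviour rather than anything specific to this setup:

Code
# Initialize the inode tables and journal up front, so there is no
# background ext4lazyinit activity after the format (the format itself takes longer):
mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/md0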
Sorry, but one more question: is it normal that the blocks written are always the same?
-
Thank you so much.
I hope this will end soon.
I'll wait a couple of days and see what happens :3
It's been 2 days now and this thing doesn't stop.
-
OK, but I believe both Plex and Emby will poll the shares whilst they are running.
Even with all Docker containers disabled, the problem with the disks persists.
-
From the output of both: mdstat tells you the RAID is active and not rebuilding, and fsck shows there are no file system errors on the array. The next step is to confirm the state of each drive. Are any media servers running, i.e. Plex or Emby?
I have Plex running in a Docker container, but all my containers are stopped now.
Thanks for your help
-
That always makes me smile when that is quoted; you need to check under Storage -> Disks -> SMART, run a long self-test on each drive, and review the output.
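For reference, the command-line equivalent of the web GUI's long self-test, using smartctl from smartmontools; /dev/sda stands in for each drive in turn:

Code
# Start an extended (long) self-test; the drive stays usable while it runs:
smartctl -t long /dev/sda

# Hours later, review the result and the attributes (5, 197, 198 in particular):
smartctl -a /dev/sda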
Thanks, I'm running a long test now.
When you say formatted I assume you mean wiped, and judging by that log you now have 9 drives in that RAID 6.
What's the output of cat /proc/mdstat?
Here is the output:
Code
root@NasOMV:~# cat /prox/mdstat
cat: /prox/mdstat: No such file or directory
root@NasOMV:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid6 sdf[4] sdg[6] sdc[1] sdh[7] sdi[8] sda[5] sdb[0] sde[3] sdd[2]
      82031280128 blocks super 1.2 level 6, 512k chunk, algorithm 2 [9/9] [UUUUUUUUU]
      bitmap: 2/88 pages [8KB], 65536KB chunk

unused devices: <none>
root@NasOMV:~#
What you would do is run fsck /dev/md0.
What is fsck?
Here is the output:
-
Not necessarily. If you google the information from the log, it has something to do with the recent RAID expansion. What is the condition of the drives, i.e. are any bad sectors being referenced in SMART, or any SMART errors, particularly attributes 5, 197, and 198? You might need to run fsck on the RAID itself.
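A minimal sketch of such a check, assuming the array carries a single ext4 filesystem as in this thread; the -n pass is read-only, and the mount path is an assumed OMV-style mount point:

Code
# The filesystem must be unmounted before it is checked:
umount /dev/md0

# Read-only pass first; -n answers "no" to every repair prompt:
fsck.ext4 -n -f /dev/md0

# Only with backups in place, let fsck apply repairs:
fsck.ext4 -f /dev/md0

# Remount afterwards (the path is an assumption; use your actual mount point):
mount /dev/md0 /srv/dev-disk-by-label-BigData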
Hello, and thanks.
I have checked and, I think, there are no SMART errors.
The 4 disks used to expand the RAID were previously used in a Synology NAS, and before expanding the OMV RAID I only did a quick format of them.
As for fsck, I have no idea how to use it; I'm a complete noob with OMV.
-
Could reinstalling OMV be a valid option?
This problem is really annoying because I have the NAS in my room and the noise is frustrating me.
-
I'm afraid I can't help you; we hope that one of our experts will have an answer.
But check the warning at the bottom of this page.
Thank you, I hope someone can help me.
-
You can log hard drive access under Linux; the switch to turn it on, where the output ends up, and how to turn it off again are sketched below.
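A minimal sketch, assuming the standard block_dump sysctl; the kernel log lines quoted below are in exactly its output format, and the log path is the Debian default:

Code
# Enable logging of every block-device read/write to the kernel log:
echo 1 > /proc/sys/vm/block_dump

# The output then ends up in the kernel log:
dmesg
# or:
tail -f /var/log/kern.log

# And this is how you switch it off again:
echo 0 > /proc/sys/vm/block_dump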
Thank you so much.
I have found this:
Code
Apr 15 17:36:14 NasOMV kernel: [109612.251306] DBENGINE(7791): WRITE block 14461408 on nvme0n1p2 (8 sectors)
Apr 15 17:36:14 NasOMV kernel: [109612.251311] DBENGINE(7789): WRITE block 35534976 on nvme0n1p2 (96 sectors)
Apr 15 17:36:14 NasOMV kernel: [109612.253023] DBENGINE(7792): WRITE block 35535072 on nvme0n1p2 (80 sectors)
Apr 15 17:36:14 NasOMV kernel: [109612.253042] DBENGINE(7790): WRITE block 35528448 on nvme0n1p2 (8 sectors)
Apr 15 17:36:14 NasOMV kernel: [109612.253198] DBENGINE(7791): WRITE block 35535152 on nvme0n1p2 (8 sectors)
Apr 15 17:36:16 NasOMV kernel: [109613.458512] ext4lazyinit(830): WRITE block 79633063168 on md0 (1024 sectors)
Apr 15 17:36:16 NasOMV kernel: [109614.030517] md0_raid6(270): WRITE block 16 on sdf (8 sectors)
Apr 15 17:36:16 NasOMV kernel: [109614.030533] md0_raid6(270): WRITE block 16 on sdg (8 sectors)
Apr 15 17:36:16 NasOMV kernel: [109614.030538] md0_raid6(270): WRITE block 16 on sdc (8 sectors)
Apr 15 17:36:16 NasOMV kernel: [109614.030542] md0_raid6(270): WRITE block 16 on sdh (8 sectors)
Apr 15 17:36:16 NasOMV kernel: [109614.030546] md0_raid6(270): WRITE block 16 on sdi (8 sectors)
Apr 15 17:36:16 NasOMV kernel: [109614.030550] md0_raid6(270): WRITE block 16 on sda (8 sectors)
Apr 15 17:36:16 NasOMV kernel: [109614.030555] md0_raid6(270): WRITE block 16 on sdb (8 sectors)
Apr 15 17:36:16 NasOMV kernel: [109614.030559] md0_raid6(270): WRITE block 16 on sde (8 sectors)
Apr 15 17:36:16 NasOMV kernel: [109614.030563] md0_raid6(270): WRITE block 16 on sdd (8 sectors)
Apr 15 17:36:16 NasOMV kernel: [109614.074894] md0_raid6(270): WRITE block 8 on sdf (1 sectors)
Apr 15 17:36:16 NasOMV kernel: [109614.074928] md0_raid6(270): WRITE block 8 on sdg (1 sectors)
Apr 15 17:36:16 NasOMV kernel: [109614.074940] md0_raid6(270): WRITE block 8 on sdc (1 sectors)
Apr 15 17:36:16 NasOMV kernel: [109614.074950] md0_raid6(270): WRITE block 8 on sdh (1 sectors)
Apr 15 17:36:16 NasOMV kernel: [109614.074961] md0_raid6(270): WRITE block 8 on sdi (1 sectors)
Apr 15 17:36:16 NasOMV kernel: [109614.074972] md0_raid6(270): WRITE block 8 on sda (1 sectors)
Apr 15 17:36:16 NasOMV kernel: [109614.074982] md0_raid6(270): WRITE block 8 on sdb (1 sectors)
Apr 15 17:36:16 NasOMV kernel: [109614.074992] md0_raid6(270): WRITE block 8 on sde (1 sectors)
Apr 15 17:36:16 NasOMV kernel: [109614.075002] md0_raid6(270): WRITE block 8 on sdd (1 sectors)
Apr 15 17:36:17 NasOMV kernel: [109614.994570] jbd2/md0-8(828): WRITE block 35153047312 on md0 (8 sectors)
Apr 15 17:36:17 NasOMV kernel: [109614.994577] jbd2/nvme0n1p2-(351): WRITE block 2922296 on nvme0n1p2 (8 sectors)
Apr 15 17:36:17 NasOMV kernel: [109614.994593] jbd2/nvme0n1p2-(351): WRITE block 2922304 on nvme0n1p2 (8 sectors)
Apr 15 17:36:17 NasOMV kernel: [109614.994598] jbd2/nvme0n1p2-(351): WRITE block 2922312 on nvme0n1p2 (8 sectors)
Apr 15 17:36:17 NasOMV kernel: [109614.994601] jbd2/nvme0n1p2-(351): WRITE block 2922320 on nvme0n1p2 (8 sectors)
Apr 15 17:36:17 NasOMV kernel: [109614.994605] jbd2/nvme0n1p2-(351): WRITE block 2922328 on nvme0n1p2 (8 sectors)
Apr 15 17:36:17 NasOMV kernel: [109614.994608] jbd2/nvme0n1p2-(351): WRITE block 2922336 on nvme0n1p2 (8 sectors)
Apr 15 17:36:17 NasOMV kernel: [109614.994611] jbd2/nvme0n1p2-(351): WRITE block 2922344 on nvme0n1p2 (8 sectors)
Apr 15 17:36:17 NasOMV kernel: [109614.994614] jbd2/nvme0n1p2-(351): WRITE block 2922352 on nvme0n1p2 (8 sectors)
Apr 15 17:36:17 NasOMV kernel: [109614.994618] jbd2/nvme0n1p2-(351): WRITE block 2922360 on nvme0n1p2 (8 sectors)
Apr 15 17:36:17 NasOMV kernel: [109614.994621] jbd2/nvme0n1p2-(351): WRITE block 2922368 on nvme0n1p2 (8 sectors)
Apr 15 17:36:17 NasOMV kernel: [109614.994646] md0_raid6(270): WRITE block 16 on sdf (8 sectors)
Apr 15 17:36:17 NasOMV kernel: [109614.994682] md0_raid6(270): WRITE block 16 on sdg (8 sectors)
Apr 15 17:36:17 NasOMV kernel: [109614.994695] md0_raid6(270): WRITE block 16 on sdc (8 sectors)
Apr 15 17:36:17 NasOMV kernel: [109614.994706] md0_raid6(270): WRITE block 16 on sdh (8 sectors)
Apr 15 17:36:17 NasOMV kernel: [109614.994716] md0_raid6(270): WRITE block 16 on sdi (8 sectors)
Apr 15 17:36:17 NasOMV kernel: [109614.994727] md0_raid6(270): WRITE block 16 on sda (8 sectors)
Apr 15 17:36:17 NasOMV kernel: [109614.994738] md0_raid6(270): WRITE block 16 on sdb (8 sectors)
Apr 15 17:36:17 NasOMV kernel: [109614.994748] md0_raid6(270): WRITE block 16 on sde (8 sectors)
Apr 15 17:36:17 NasOMV kernel: [109614.994759] md0_raid6(270): WRITE block 16 on sdd (8 sectors)
Apr 15 17:36:17 NasOMV kernel: [109614.997635] jbd2/nvme0n1p2-(351): WRITE block 2922376 on nvme0n1p2 (8 sectors)
Apr 15 17:36:17 NasOMV kernel: [109615.195520] md0_raid6(270): WRITE block 8 on sdf (1 sectors)
Apr 15 17:36:17 NasOMV kernel: [109615.195555] md0_raid6(270): WRITE block 8 on sdg (1 sectors)
Apr 15 17:36:17 NasOMV kernel: [109615.195569] md0_raid6(270): WRITE block 8 on sdc (1 sectors)
Apr 15 17:36:17 NasOMV kernel: [109615.195580] md0_raid6(270): WRITE block 8 on sdh (1 sectors)
Apr 15 17:36:17 NasOMV kernel: [109615.195590] md0_raid6(270): WRITE block 8 on sdi (1 sectors)
Apr 15 17:36:17 NasOMV kernel: [109615.195601] md0_raid6(270): WRITE block 8 on sda (1 sectors)
Apr 15 17:36:17 NasOMV kernel: [109615.195611] md0_raid6(270): WRITE block 8 on sdb (1 sectors)
Apr 15 17:36:17 NasOMV kernel: [109615.195622] md0_raid6(270): WRITE block 8 on sde (1 sectors)
Apr 15 17:36:17 NasOMV kernel: [109615.195633] md0_raid6(270): WRITE block 8 on sdd (1 sectors)
Apr 15 17:36:17 NasOMV kernel: [109615.244024] jbd2/md0-8(828): WRITE block 35153047320 on md0 (8 sectors)
Apr 15 17:36:17 NasOMV kernel: [109615.244576] jbd2/md0-8(828): WRITE block 35153047328 on md0 (8 sectors)
Apr 15 17:36:18 NasOMV kernel: [109615.650517] md0_raid6(270): WRITE block 16 on sdf (8 sectors)
Apr 15 17:36:18 NasOMV kernel: [109615.650554] md0_raid6(270): WRITE block 16 on sdg (8 sectors)
Apr 15 17:36:18 NasOMV kernel: [109615.650567] md0_raid6(270): WRITE block 16 on sdc (8 sectors)
Apr 15 17:36:18 NasOMV kernel: [109615.650578] md0_raid6(270): WRITE block 16 on sdh (8 sectors)
Apr 15 17:36:18 NasOMV kernel: [109615.650588] md0_raid6(270): WRITE block 16 on sdi (8 sectors)
Apr 15 17:36:18 NasOMV kernel: [109615.650598] md0_raid6(270): WRITE block 16 on sda (8 sectors)
Apr 15 17:36:18 NasOMV kernel: [109615.650608] md0_raid6(270): WRITE block 16 on sdb (8 sectors)
Apr 15 17:36:18 NasOMV kernel: [109615.650618] md0_raid6(270): WRITE block 16 on sde (8 sectors)
Apr 15 17:36:18 NasOMV kernel: [109615.650628] md0_raid6(270): WRITE block 16 on sdd (8 sectors)
Apr 15 17:36:18 NasOMV kernel: [109615.792952] php-fpm7.0(28196): dirtied inode 2193 (sess_haoff5rkovamub4s5evopvfon5) on nvme0n1p2
Apr 15 17:36:18 NasOMV kernel: [109615.793415] php-fpm7.0(28196): dirtied inode 2193 (sess_haoff5rkovamub4s5evopvfon5) on nvme0n1p2
Apr 15 17:36:18 NasOMV kernel: [109616.284857] md0_raid6(270): WRITE block 8 on sdf (1 sectors)
Apr 15 17:36:18 NasOMV kernel: [109616.284878] md0_raid6(270): WRITE block 8 on sdg (1 sectors)
Apr 15 17:36:18 NasOMV kernel: [109616.284887] md0_raid6(270): WRITE block 8 on sdc (1 sectors)
Apr 15 17:36:18 NasOMV kernel: [109616.284895] md0_raid6(270): WRITE block 8 on sdh (1 sectors)
Apr 15 17:36:18 NasOMV kernel: [109616.284902] md0_raid6(270): WRITE block 8 on sdi (1 sectors)
Apr 15 17:36:18 NasOMV kernel: [109616.284910] md0_raid6(270): WRITE block 8 on sda (1 sectors)
Apr 15 17:36:18 NasOMV kernel: [109616.284917] md0_raid6(270): WRITE block 8 on sdb (1 sectors)
Apr 15 17:36:18 NasOMV kernel: [109616.284925] md0_raid6(270): WRITE block 8 on sde (1 sectors)
Apr 15 17:36:18 NasOMV kernel: [109616.284932] md0_raid6(270): WRITE block 8 on sdd (1 sectors)
Apr 15 17:36:19 NasOMV kernel: [109617.298543] kworker/u12:2(28111): WRITE block 6077520 on nvme0n1p2 (8 sectors)
Apr 15 17:36:20 NasOMV kernel: [109617.554522] ext4lazyinit(830): WRITE block 79633064192 on md0 (1024 sectors)
What can it be?
-
Can someone help me?
-
Good morning,
I have a problem with disk usage after expanding the RAID and the file system (via the web GUI).
Now, as you can see in the image, the disks are always in constant use and I can't understand why.
I'm new to OMV and I don't really know what to do.
Before expanding the RAID the disks would spin down, but now they no longer do.
Do you have any advice?
Please help me.
Thanks so much.
-
Hi
Was gonna put a response to this earlier, but I'm a newb and wanted to see the debate first... always wanting to learn from the power users! :]
My motherboard selection journey took me through a few basics... the one thing I learned is that a server is a different beast to a workstation, or PC (esp gaming)... the main issue being ECC (error correcting) RAM and the number of SATA ports (etc)
Balancing this with power consumption and acquisition cost! [the NAS rated drives are pretty much a constant anyways] - CPU power and RAM are not so critical (from what I learned here)
> bottom line: I skipped the PC/x86 route and went to the Helios4 (now on preorder, waiting for the production run!) see here... because, reasons! (ARM is the future y'all!)
> while I'm waiting for the Helios to arrive, I'm using an old notebook PC (10 yrs old, works fine with OMV) and USB connected external drives... although I could do a swap-out for the optical drive inside the notebook if I really could be bothered! (which would use the SATA connection)
> the notebook can have any old 2.5" HDD or a small capacity SSD to boot off -or- get into the BIOS to boot off a USB stick (not too hard)... then all the media management (eg PLEX, Emby...) *plugins* are available via Docker (see the videos + the getting started threads) ...here... and you manage OMV from whatever you use as your 'daily driver' (or any WiFi device that can run a web browser)
Good Luck! Sharing is caring!
Thanks for your tips
If I build 2 different systems, one for storage and one for Plex, could the LAN cause problems when Plex is transcoding? I mean bandwidth problems.
-
I prefer to re-encode in advance instead of on-the-fly. It makes much better image quality possible. I have a few crazy high bitrate 4K HEVC videos. That is the only media I can't play over wifi directly to my clients.
I had Emby on one of my HC2s re-encode it to 1080p x264 8Mbps with H.264 CRF 20, H.264 preset slower. I just left clicked on the video and selected convert. Not sure how long it took, but it was done a few days later when I was ready to watch it. Impossible to see any difference from the original media at normal viewing distance on a 1080p screen.
I have Emby (official armhf docker) set to use ffmpeg with v4l2 when transcoding and re-encoding on my HC2s. Not sure if that helps.
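For reference, a manual ffmpeg equivalent of that conversion, using the settings mentioned above; the file names are placeholders:

Code
# 4K HEVC -> 1080p H.264, CRF 20, preset slower, capped near 8 Mbps:
ffmpeg -i input-4k-hevc.mkv \
  -vf scale=-2:1080 \
  -c:v libx264 -crf 20 -preset slower -maxrate 8M -bufsize 16M \
  -c:a copy \
  output-1080p.mkv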
About cost. The expensive thing is the drives. In comparison the NAS is cheap. At least if you go for big NAS drives. By using several smaller units it is easier to dimension right and expand as needed. With the monolith approach you need to overprovision. Either you have empty drive bays or you have half empty hdds.
Also 10 GbE may help with bandwidth for backups and restores. But it is expensive and using several 1 GbE units connected to a switch you can easily achieve much higher simultaneous combined bandwidth than 10 Gb/s.
About size. I have 6 HC2s on a bookshelf. Takes up half the shelf.
https://forum.openmediavault.o…ent/10908-c5-running-jpg/
About backups. You most likely need at least one extra unit for backups, with similar storage capacity. Perhaps more. Perhaps at different locations. Then you already have a multi node system. How do you handle backups now? If you expand the data storage you most likely have to expand the backup storage as well?
At the moment I have a backup on 3x 8 TB external drives and a secure backup on Amazon AWS, but on Amazon I only store the most important things.
About re-encoding: at the moment it's not possible due to the few free TB on my NAS.