Similar issues only occurring with 4.13: https://ubuntuforums.org/showthread.php?t=2376227&page=3
Which drives are these two?
/dev/sde & /dev/sdd
C'mon. Which drives are these two? The aforementioned thread deals with the WD30EZRX, so which drives are you using?!
And another WD30EFRX becoming invisible with kernel 4.13: Fresh installation of OMV4, unable to mount old XFS drive, with data, that works on OMV2
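For reference, mapping device nodes to drive models and serial numbers can be done with a small loop (a sketch assuming smartmontools is installed; run as root, and note that the /dev/sdX letters vary per system):

```shell
# Map each block device to its model and serial number so the /dev/sdX
# letters can be matched to physical drives (assumes smartmontools is
# installed; run as root).
for d in /dev/sd?; do
  [ -b "$d" ] || continue
  model=$(smartctl -i "$d" 2>/dev/null | awk -F': *' '/^Device Model/ {print $2}')
  serial=$(smartctl -i "$d" 2>/dev/null | awk -F': *' '/^Serial Number/ {print $2}')
  printf '%s  %s  %s\n' "$d" "$model" "$serial"
done
```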
I misunderstood your previous question. Take a look:
# smartctl -a /dev/sde
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.13.0-0.bpo.1-amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: Western Digital Green
Device Model: WDC WD20EZRX-00D8PB0
Serial Number: WD-WCC4M0ZYTJH8
LU WWN Device Id: 5 0014ee 20b50d5a6
Firmware Version: 80.00A80
User Capacity: 2,000,398,934,016 bytes [2.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 5400 rpm
Device is: In smartctl database [for details use: -P show]
ATA Version is: ACS-2 (minor revision not indicated)
SATA Version is: SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is: Tue Dec 19 22:40:05 2017 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
# smartctl -a /dev/sdd
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.13.0-0.bpo.1-amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: Western Digital Green
Device Model: WDC WD20EZRX-00D8PB0
Serial Number: WD-WCC4M5LES82D
LU WWN Device Id: 5 0014ee 2b5fc2975
Firmware Version: 80.00A80
User Capacity: 2,000,398,934,016 bytes [2.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 5400 rpm
Device is: In smartctl database [for details use: -P show]
ATA Version is: ACS-2 (minor revision not indicated)
SATA Version is: SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is: Tue Dec 19 22:42:42 2017 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
Seems to be another hit.
Is it advisable to downgrade to an earlier kernel, since the issue started with 4.13?
According to the bug reports that took a deeper look, this should work around the problem. But of course it would be better if the root cause were identified and fixed in recent kernel versions, so the fix could be backported to 4.13 later.
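For reference, a rough sketch of what such a downgrade looks like on Debian-based OMV. The kernel package version is an example only (check what your repositories actually offer), and the sed step is shown on a stand-in file rather than the real /etc/default/grub:

```shell
# Sketch: pin an older kernel on Debian-based OMV (versions are examples).
# 1) See which 4.9 images are available, then install one:
#      apt-cache search linux-image-4.9
#      apt-get install linux-image-4.9.0-4-amd64
# 2) Make GRUB boot a remembered choice instead of the newest kernel.
#    Demonstrated on a stand-in file; the real target is /etc/default/grub:
grub_cfg=/tmp/grub.default.example
printf 'GRUB_DEFAULT=0\n' > "$grub_cfg"
sed -i 's/^GRUB_DEFAULT=.*/GRUB_DEFAULT=saved/' "$grub_cfg"
cat "$grub_cfg"    # prints GRUB_DEFAULT=saved
# 3) Then run: update-grub && grub-set-default '<title of the 4.9 menu entry>'
```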
Threads/reports to watch:
Besides that, I have no idea whether a workaround in OMV is possible (not relying on blkid output). But if I understood the bug reports correctly, the problem is more severe than just a cosmetic issue with one tool.
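One way to spot the symptom without trusting blkid alone is to compare the kernel's device list against blkid's (a bash sketch; keep in mind that blkid legitimately skips devices without a recognized filesystem signature, so a difference is a hint, not proof):

```shell
#!/bin/bash
# Devices the kernel lists (lsblk) but blkid does not report. blkid only
# prints devices carrying a recognized signature, so empty disks showing up
# here are expected; run as root for complete blkid output.
kernel_view=$(lsblk -rno NAME 2>/dev/null | sort -u)
blkid_view=$(blkid -o device 2>/dev/null | sed 's|^/dev/||' | sort -u)
comm -23 <(printf '%s\n' "$kernel_view") <(printf '%s\n' "$blkid_view")
```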
Another victim of WD20EFRX and kernel 4.14 here. My RAID1 is becoming invisible in the File Systems menu.
Edit: I've downgraded the kernel to 4.9. I'm still having the issue.
Just upgraded last night from a stable 3.x build (running for a LONG time with no issues) to 4.1.0.1.
As others have stated - my md0 partition is not mounted or recognized.
It consists of four WDC WD30EFRX-68E drives.
Output of udevadm info --query=property --name=/dev/md0
DEVLINKS=/dev/disk/by-id/md-uuid-de9ec887:074a453e:f6ad3e13:8e8d12a6 /dev/disk/by-id/md-name-OMV2:HULK
DEVNAME=/dev/md0
DEVPATH=/devices/virtual/block/md0
DEVTYPE=disk
MAJOR=9
MD_DEVICES=4
MD_DEVICE_sdb_DEV=/dev/sdb
MD_DEVICE_sdb_ROLE=0
MD_DEVICE_sdc_DEV=/dev/sdc
MD_DEVICE_sdc_ROLE=1
MD_DEVICE_sdd_DEV=/dev/sdd
MD_DEVICE_sdd_ROLE=2
MD_DEVICE_sde_DEV=/dev/sde
MD_DEVICE_sde_ROLE=3
MD_LEVEL=raid5
MD_METADATA=1.2
MD_NAME=OMV2:HULK
MD_UUID=de9ec887:074a453e:f6ad3e13:8e8d12a6
MINOR=0
SUBSYSTEM=block
SYSTEMD_WANTS=mdmonitor.service
TAGS=:systemd:
USEC_INITIALIZED=6271534
Output of lsblk:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 37.3G 0 disk
|-sda1 8:1 0 35.7G 0 part /
|-sda2 8:2 0 1K 0 part
`-sda5 8:5 0 1.6G 0 part [SWAP]
sdb 8:16 0 2.7T 0 disk
`-md0 9:0 0 8.2T 0 raid5
sdc 8:32 0 2.7T 0 disk
`-md0 9:0 0 8.2T 0 raid5
sdd 8:48 0 2.7T 0 disk
`-md0 9:0 0 8.2T 0 raid5
sde 8:64 0 2.7T 0 disk
`-md0 9:0 0 8.2T 0 raid5
Has anyone come up with a fix for this???
The information appears to still be there, and I do not want to risk losing everything. Any thoughts?
Results of "fsck /dev/md0":
fsck from util-linux 2.29.2
e2fsck 1.43.4 (31-Jan-2017)
HULK: clean, 448820/274702336 files, 554503719/2197601280 blocks
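Since lsblk shows md0 assembled and fsck reports the filesystem clean, a low-risk sketch for checking the data without modifying anything (device name as in this thread, mount point an example; verify on your own system first):

```shell
# Inspect the assembled array without writing anything (run as root).
command -v mdadm >/dev/null 2>&1 && mdadm --detail /dev/md0 2>/dev/null \
  | awk -F': *' '/State :/ {print "array state:", $2}'
cat /proc/mdstat 2>/dev/null || true    # overview of all md arrays
# With fsck reporting the filesystem clean, a read-only mount is a safe way
# to check that the data is reachable:
#   mkdir -p /mnt/recovery
#   mount -o ro /dev/md0 /mnt/recovery && ls /mnt/recovery
```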
TIA
George.
If I reinstalled 3.x from scratch, do you think it would re-recognize the old RAID array (md0)? Or does anyone have any thoughts on how to fix this??? Really getting desperate to get my files back.
Thanks
George
You have a backup, right? I would try a reinstall with the drives in place. Be sure to let the install reboot on its own; don't force it. It seems there are scripts running that can cause random problems if they don't finish.
If you don't have space to do a backup and it is RAID 1, you should be able to do the install with only one drive in the machine. Once it is up and working, you can add the other drive back.
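The "install with one drive, add the other back" step can be sketched with mdadm; the device names below are examples only, so double-check with mdadm --detail before adding anything:

```shell
# Re-adding the second RAID1 member after a single-drive reinstall.
# /dev/md0 and /dev/sdc are examples; verify your device names first!
#   mdadm --assemble --scan          # pick up the degraded array
#   mdadm --detail /dev/md0          # confirm: raid1, one active member
#   mdadm /dev/md0 --add /dev/sdc    # add the drive back; a resync starts
# Resync progress appears in /proc/mdstat and can be pulled out with grep:
grep -o 'recovery = *[0-9.]*%' /proc/mdstat 2>/dev/null || true
```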
Good luck
It is RAID 5 with 4 drives. I do have a backup (I sync this to another NAS for most of the files, but not all; I know, bad planning on my part).
I reinstalled with version 3.x. The array was still there; I re-created the SMB shares, added them as shared folders, and can access everything.
Thanks
Is this issue fixed in kernel versions 4.15 and later? Can anyone confirm?
Thanks
not resolved
Same issue here with a Seagate IronWolf 4TB 3.5" internal NAS HDD, SATA 6Gb/s (ST4000VN008).
I never resolved the issue.
I had a full backup on another NAS, did a fresh install of 4.x, and wiped and rebuilt the array as ZFS.