Ok thanks for your help just the same
Posts by Rdborg
-
-
Ok, at the moment I have only the one good drive in the system, apart from the SSDs for cache and other stuff.
1
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
unused devices: <none>
2 - Please note I tried to plug it into my Windows machine to see if I could access the files through an OMV VM or Linux VM - that's sda1
/dev/sdb1: UUID="ec8d339d-b214-4adb-ae74-bcc1e4f38f73" TYPE="ext4" PARTUUID="369fb5b8-01"
/dev/sdb5: UUID="a09029cf-c270-4582-a28e-2e611da0fa1f" TYPE="swap" PARTUUID="369fb5b8-05"
/dev/sdc1: LABEL="SSDStorage" UUID="2dcf4355-8d2c-4cc8-aa8e-d5062f8cc118" TYPE="ext4" PARTUUID="a6eb7aee-6a18-43b0-ad82-0a437da10e44"
/dev/sda1: PARTLABEL="Microsoft reserved partition" PARTUUID="ee21cbc9-ccd6-4ea7-bec6-b056bbd43073"
3
Partition 1 does not start on physical sector boundary.
Disk /dev/sdb: 59.6 GiB, 64023257088 bytes, 125045424 sectors
Disk identifier: 0x369fb5b8
Disk /dev/sdc: 119.2 GiB, 128035676160 bytes, 250069680 sectors
Disk identifier: F38DD706-EA60-4B96-ADDA-737642914B47
Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Disk identifier: 657CD890-819D-4604-906F-250F366D07E1
4
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
INACTIVE-ARRAY /dev/md0 metadata=1.2 name=AlphaNAS:Storage UUID=9e5f4055:1fabb273:b708769f:fbe03130
5
mdadm: Unknown keyword INACTIVE-ARRAY
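The "Unknown keyword INACTIVE-ARRAY" complaint usually just means that an INACTIVE-ARRAY line, as printed by mdadm --detail --scan while an array is down, has ended up in /etc/mdadm/mdadm.conf, where only ARRAY is a valid keyword. A minimal cleanup sketch, assuming the conf file is exactly the one shown above:
sed -i 's/^INACTIVE-ARRAY/ARRAY/' /etc/mdadm/mdadm.conf   # turn the invalid keyword into a normal ARRAY line
mdadm --assemble --scan --verbose                         # then ask mdadm to assemble whatever it can find
This only clears the warning; whether the array actually assembles still depends on the superblock being intact on the member disk.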
-
It's a one-year-old setup; I had everything up and running, and I removed the bad drive by taking it out. Normal behaviour would be that the RAID still shows, but as degraded, not that it disappears. The NAS is a custom-built PC with 4 drives on SATA ports: 2 SSDs and 2 HDDs (one of which is the bad, SMART-failed one).
Is there any way I can remount the one good drive and make it work and show as a degraded RAID or as a single normal drive, or possibly plug it into another PC and recover the data?
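For the record, the usual way to bring a mirror back with only one member is to assemble it by hand and force it to start degraded. A minimal sketch, assuming the surviving member is the whole disk /dev/sda and the array was /dev/md0 (both names are assumptions that need checking against blkid and /proc/mdstat before running anything):
mdadm --stop /dev/md0                              # release the array if it is sitting there inactive
mdadm --assemble --run --force /dev/md0 /dev/sda   # start the RAID 1 degraded with its single remaining member
cat /proc/mdstat                                   # md0 should now show as active with [2/1] members
mount /dev/md0 /mnt                                # the data can then be read or copied off
Because the mdadm superblock lives on the drive itself, the same commands also work after moving the disk to another Linux machine.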
-
Basically I have a small OMV NAS with 2 Seagate IronWolf 4 TB drives in RAID 1 on OMV. One of these was failing according to SMART, so I decided to remove it and send it back as it's still new. Once I booted back up my RAID was gone. I tried to put the drive back in and it's still the same. If I try to make the RAID again it does not show the 2 drives, not even the working one, but they do show up in the Disks section. Making a new file system would erase the good drive, and that seems like the only way to make it come back online. Any advice please?
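Before doing anything destructive it is worth checking from a shell whether the mdadm superblocks are still on the disks at all; if they are, the array can normally be re-assembled rather than re-created. A short diagnostic sketch, assuming the two 4 TB drives appear as /dev/sda and /dev/sdb (hypothetical names, take the real ones from the Disks page):
mdadm --examine /dev/sda /dev/sdb   # prints the RAID superblock (array UUID, level, state) if one survives
cat /proc/mdstat                    # shows whether the kernel sees the array at all, even as inactive
Creating a new file system from the GUI really would wipe the remaining copy of the data, so that is best kept as a last resort.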
-
Hi Guys
Just did a fresh install of OMV4 on my new server hardware.
I have 4 x 2 TB WD Red drives and I have created a RAID 10 array.
When I go to File Systems to make one, I press Create, choose a name and file system type, and all is well: it starts making it. When it gets to the end it says DONE and no errors come up.
I press the close button on the small popup and the file system still shows as initializing, then after a few seconds it disappears completely. I did a reboot and it didn't show, so I tried again and the same thing happens. Any suggestions please?
Thanks
Ryan
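For questions like this, the quickest way to see what actually happened is to look from the shell rather than the web page. A small sketch, assuming the RAID 10 device is /dev/md0 (the name is an assumption; /proc/mdstat shows the real one):
cat /proc/mdstat    # is the array present, and is the initial sync still running?
blkid /dev/md0      # is there actually a file system signature on the array?
dmesg | tail -n 50  # were any disk or controller errors logged while the file system was being created?
A freshly created RAID 10 of 4 x 2 TB drives spends many hours on its initial sync, and the array and file system status are easier to interpret once that has finished.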
-
Hi guys
I was wondering: I have an OMV v3 install that has been running for a few months, and I have a few questions.
I have 2 x 2 TB drives in a mirror RAID at the moment.
If I add a 3rd 2 TB drive, can I upgrade the RAID from mirror RAID 1 to RAID 5? Also, is it possible for the new drive to be a different brand or model to the other two, as long as it has the same capacity?
Thanks
Ryan
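At the mdadm level this conversion is possible, and a different brand or model of the same capacity is fine for the third disk; the level change itself is normally done from the command line rather than from the web interface. A sketch of the usual sequence, assuming the mirror is /dev/md0, the new drive is /dev/sdd and the file system on the array is ext4 (all three are assumptions to verify first, and a backup is strongly advised before any reshape):
mdadm --grow /dev/md0 --level=5          # convert the 2-disk RAID 1 into a 2-disk RAID 5
mdadm --add /dev/md0 /dev/sdd            # add the new 2 TB drive as a spare
mdadm --grow /dev/md0 --raid-devices=3   # reshape across all 3 drives (this can take many hours)
resize2fs /dev/md0                       # finally grow the ext4 file system into the new space
Progress of the reshape can be watched in /proc/mdstat while it runs.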