Hi all, I'm currently running xpenology with 9x 2TB WD EARX drives, 1x 4TB WD Red and 1x 3TB WD Red, plus an OCZ Vertex 3 60GB SSD where I want to put the OS (OMV).
Currently my disks are configured as SHR, which I believe is RAID 5?
If I start with OMV, is it possible to keep my data, or is it better to start from scratch?
I don't mind downloading all the stuff again.
And how long would it take for OMV to build my volume again with all data disks attached?
I don't want to wait more than 24 hours before my storage is built again.
Thanks for all your help.
-
I'm guessing xpenology uses a Linux filesystem. I would boot SystemRescueCd to see if it recognizes the filesystem. If it does, then OMV will too. If that is the case, you can use your disks as they are. Just unplug them, install OMV, plug them back in, mount, and create shared folders and users.
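Roughly, checking this from SystemRescueCd could look like the sketch below; the device and array names are only examples, yours will differ.
Code
cat /proc/mdstat                  # does the kernel already see the md arrays?
mdadm --assemble --scan           # try to assemble any arrays it finds
blkid /dev/md*                    # show the filesystem type on each array
mkdir -p /mnt/test
mount -o ro /dev/md2 /mnt/test    # read-only test mount of one data array
ls /mnt/test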
-
SHR is recognized in OMV. I migrated from 3x 6TB in SHR (Synology Hybrid RAID) to 3x 6TB RAID 5 without losing data. So in my case SHR is actually a normal RAID 5 below the Synology software layer.
But I see you have multiple disk sizes in SHR. It could be that it is not only RAID 5 but also other RAID levels combined. The Synology software layer makes it visible as one volume.
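If you want to see what SHR actually built underneath before deciding, something like this should show it (the md names are just examples from my own box):
Code
cat /proc/mdstat            # lists every md array and its RAID level
mdadm --detail /dev/md2     # level, chunk size and member disks of one array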
-
OK, so probably the best option is to start from scratch, no problem for me.
But how can I make use of all my disk storage then? -
If I were you I would try it. You can always start from scratch.
-
OK, I will probably give it a shot tomorrow I think. Do I just have to mount the filesystems after OMV is installed on the OS/boot disk?
-
In my case the filesystem was visible from the start. I only needed to mount the filesystem and add the shared folders in OMV (use the exact same names, it is case sensitive). In Synology there were also 2x RAID 1 filesystems (for DSM and swap); these are not visible in DSM. I deleted them manually in the CLI.
But I was thinking, if I am not mistaken, you can go to the CLI on your xpenology system and post the output of cat /proc/mdstat. It will show your Linux RAID setup.
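For reference, deleting those leftover DSM/swap arrays from the CLI can look roughly like this; only a sketch, and the md/partition names are examples, so check them against your own /proc/mdstat before deleting anything.
Code
mdadm --stop /dev/md0                          # stop the old DSM array
mdadm --stop /dev/md1                          # stop the old swap array
mdadm --zero-superblock /dev/sdc1 /dev/sdc2    # wipe the RAID metadata; repeat for the matching partitions on each disk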
-
Code
SYNOSERVER> cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md5 : active raid1 sde8[0] sdj8[1]
      976742784 blocks super 1.2 [2/2] [UU]
md4 : active raid5 sdc7[0] sdt7[9] sds7[8] sdr7[7] sdq7[6] sdp7[5] sdo7[4] sdj7[3] sde7[2] sdd7[1]
      8790685056 blocks super 1.2 level 5, 64k chunk, algorithm 2 [10/10] [UUUUUUUUUU]
md3 : active raid5 sdc6[0] sdt6[10] sds6[9] sdr6[8] sdq6[7] sdp6[6] sdo6[5] sdj6[4] sdf6[3] sde6[2] sdd6[1]
      9181376640 blocks super 1.2 level 5, 64k chunk, algorithm 2 [11/11] [UUUUUUUUUUU]
md2 : active raid5 sdc5[0] sdu5[11] sdt5[10] sds5[9] sdr5[8] sdq5[7] sdp5[6] sdo5[5] sdj5[4] sdf5[3] sde5[2] sdd5[1]
      592689152 blocks super 1.2 level 5, 64k chunk, algorithm 2 [12/12] [UUUUUUUUUUUU]
md1 : active raid1 sdc2[0] sdd2[1] sde2[2] sdf2[3] sdj2[4] sdo2[5] sdp2[6] sdq2[7] sdr2[8] sds2[9] sdt2[10] sdu2[11]
      2097088 blocks [16/12] [UUUUUUUUUUUU____]
md0 : active raid1 sdc1[0] sdd1[1] sde1[2] sdf1[3] sdj1[4] sdo1[5] sdp1[11] sdq1[10] sdr1[9] sds1[8] sdt1[7] sdu1[6]
      2490176 blocks [16/12] [UUUUUUUUUUUU____]
Hope this helps?
And could you maybe give me some advice on how to proceed from here? -
First of all, I have (very) minor Linux skills. I believe md0 and md1 are your OS (DSM) and swap. This means md2, md3, md4 and md5 are your storage arrays. These are visible as one filesystem in DSM due to the SHR layer, but in OMV they will be visible as four different filesystems, not as one.
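A quick way to confirm which arrays hold data and which are the DSM/swap leftovers (sketch only, adjust the names to your own output):
Code
blkid /dev/md0 /dev/md1 /dev/md2 /dev/md3 /dev/md4 /dev/md5
# md0/md1 should come back as the small DSM ext4 and swap,
# while md2-md5 carry the big data filesystems you would mount in OMV.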
-
Well, I will start from scratch again just to be on the safe side.
How long will it take to create my RAID or ZFS pool?
I know from another OS I tried that the pool/disks were available instantly as a RAID-Z2 ZFS pool.
Can this be the same in OMV as well?
The main issue for me is that I don't want to wait 48 hours before I can use the disks. -
With mdadm RAID, they will not be available instantly, especially with a RAID of that size. I don't see any reason to start from scratch though. The drives are using mdadm RAID already; just mount your filesystems in the web interface.
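To give an idea of what "not instantly" means: if you did rebuild from scratch, the initial sync runs in the background and you can watch it. This is only a sketch, and the create line is an example layout, not a recommendation for these exact disks.
Code
mdadm --create /dev/md0 --level=5 --raid-devices=9 /dev/sd[b-j]   # example only
mkfs.ext4 /dev/md0          # the filesystem can be created right away
cat /proc/mdstat            # shows resync progress and a rough finish time
# The array is usable while the initial sync runs, just slower, so you
# don't strictly have to wait for it to finish before mounting.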
-
Well, I just found out my SSD is part of the SHR RAID as well, so I think it will be a degraded array?
-
That is a problem unless you have another drive to replace it with.
-
Well, I don't have an extra SSD lying around at the moment, so I have to start from zero.
Now to find the best way to get my disks available for the server as quickly as possible. -
Maybe it is an idea to create a ZFS pool some other way and then mount it in OMV?
That way I could at least use the pool instantly? Something like the sketch below is what I had in mind.
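This is only a minimal sketch, assuming the OMV ZFS plugin is installed; the pool name and device names are examples.
Code
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf   # example devices
zpool status tank      # a new pool is usable immediately, no initial sync like mdadm
zpool export tank      # export it before moving the disks...
zpool import tank      # ...then import it on the OMV box and mount it there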
-
OK, so I still have to wait about 48 hours before the data disks will be usable?
I want to configure it as one big media server for my household, so the preferred plugins are SABnzbd, CouchPotato, SickBeard, Plex and so on.