Hi all. I'm coming from an XPEnology server, but it got boring, so I've now installed OMV on a new SSD and I'm busy installing updates and plugins. I want to run it as a downloader with SABnzbd, CouchPotato, Sonarr, Headphones, etc. I also have 26 TB of storage spread over 11 HDDs. I can see my HDDs in the GUI, but I don't know how to get the disks online as one big storage pool and then create shared folders on it. It looks to me like OMV sees the Synology RAID as a RAID, so maybe I can import the disks without data loss. Any advice is appreciated, and quick advice even more so.
For the mods: I don't seem to be able to put the right label on this thread.
Synology raid or.....?
-
- OMV 1.0
- ikkeenjij36
If the RAID array shows up in the RAID tab and the filesystem on the RAID array shows up in the Filesystems tab, you should be able to mount the filesystem and create shared folders.
-
Well, it shows up in the RAID tab as 3 arrays, and the only thing I can do is repair them.
In the Filesystems tab there is no RAID array available to mount. -
What is the output of:
cat /proc/mdstat
blkid -
Here you go:
Code
root@OMVSERVER:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1]
md125 : active (auto-read-only) raid1 sdm7[0] sdn7[1]
      976742784 blocks super 1.2 [2/2] [UU]

md126 : active (auto-read-only) raid5 sdk5[0] sdg5[10] sdf5[9] sdh5[8] sdc5[7] sdb5[6] sde5[5] sdn5[4] sdl5[11] sdm5[2] sdj5[1]
      9720276480 blocks super 1.2 level 5, 64k chunk, algorithm 2 [11/11] [UUUUUUUUUUU]
      resync=PENDING

md127 : active (auto-read-only) raid5 sdk6[0] sdg6[9] sdf6[8] sdh6[7] sdc6[6] sdb6[5] sde6[4] sdn6[3] sdm6[2] sdj6[1]
      8790685056 blocks super 1.2 level 5, 64k chunk, algorithm 2 [10/10] [UUUUUUUUUU]

unused devices: <none>
-
And here is the output of blkid:
Code
root@OMVSERVER:~# blkid
/dev/sdi1: SEC_TYPE="msdos" UUID="4CFC-FC86" TYPE="vfat"
/dev/sda1: UUID="c6c999e9-f2b9-4e32-b514-d730120a4d11" TYPE="ext4"
/dev/sda5: UUID="b69e62be-bb10-4047-8056-c7668af4a466" TYPE="swap"
/dev/sr0: LABEL="OpenMediaVault" TYPE="iso9660"
/dev/sdj1: UUID="6ef89a86-c5e5-1d0a-7054-cb0ce28b11d3" TYPE="linux_raid_member"
/dev/sdj2: UUID="6ada6b88-17e9-1dee-7054-cb0ce28b11d3" TYPE="linux_raid_member"
/dev/sdj5: UUID="94a53350-b407-1c4e-7c05-0d7cc0219d8b" UUID_SUB="1859d76b-96ad-ddd4-36d7-b8c7b7a27df9" LABEL="synoserver:2" TYPE="linux_raid_member"
/dev/sdj6: UUID="6cfdefd9-e5d1-de89-337b-3b2c9a6b30df" UUID_SUB="06c47e7b-d6e4-d012-01eb-e2f23e3e01df" LABEL="synoserver:3" TYPE="linux_raid_member"
/dev/sdn1: UUID="6ef89a86-c5e5-1d0a-7054-cb0ce28b11d3" TYPE="linux_raid_member"
/dev/sdn2: UUID="6ada6b88-17e9-1dee-7054-cb0ce28b11d3" TYPE="linux_raid_member"
/dev/sdn5: UUID="94a53350-b407-1c4e-7c05-0d7cc0219d8b" UUID_SUB="b035e8c0-147b-60a4-6a57-d880950bfd87" LABEL="synoserver:2" TYPE="linux_raid_member"
/dev/sdn6: UUID="6cfdefd9-e5d1-de89-337b-3b2c9a6b30df" UUID_SUB="12ac95be-cb65-06e9-0cf5-f7ebccfcfcf7" LABEL="synoserver:3" TYPE="linux_raid_member"
/dev/sdn7: UUID="efc73beb-5207-8bdd-a88d-3a15c07bd839" UUID_SUB="08258116-fc51-0a33-8662-df11c9c446ba" LABEL="synoserver:4" TYPE="linux_raid_member"
/dev/sdm1: UUID="6ef89a86-c5e5-1d0a-7054-cb0ce28b11d3" TYPE="linux_raid_member"
/dev/sdm2: UUID="6ada6b88-17e9-1dee-7054-cb0ce28b11d3" TYPE="linux_raid_member"
/dev/sdm5: UUID="94a53350-b407-1c4e-7c05-0d7cc0219d8b" UUID_SUB="86ae6bc5-9998-efd5-decd-8c3fe326b63f" LABEL="synoserver:2" TYPE="linux_raid_member"
/dev/sdm6: UUID="6cfdefd9-e5d1-de89-337b-3b2c9a6b30df" UUID_SUB="502ceb4a-b757-4573-d196-cbf723483617" LABEL="synoserver:3" TYPE="linux_raid_member"
/dev/sdm7: UUID="efc73beb-5207-8bdd-a88d-3a15c07bd839" UUID_SUB="c1e8b676-b7a2-1c59-472f-3f5b9385bed1" LABEL="synoserver:4" TYPE="linux_raid_member"
/dev/sdk1: UUID="6ef89a86-c5e5-1d0a-7054-cb0ce28b11d3" TYPE="linux_raid_member"
/dev/sdk2: UUID="6ada6b88-17e9-1dee-7054-cb0ce28b11d3" TYPE="linux_raid_member"
/dev/sdk5: UUID="94a53350-b407-1c4e-7c05-0d7cc0219d8b" UUID_SUB="7554a8c0-7c1a-f4de-e7b5-63d96f768725" LABEL="synoserver:2" TYPE="linux_raid_member"
/dev/sdk6: UUID="6cfdefd9-e5d1-de89-337b-3b2c9a6b30df" UUID_SUB="921d9219-b155-efd9-2970-8edb719fa495" LABEL="synoserver:3" TYPE="linux_raid_member"
/dev/sdl1: UUID="6ef89a86-c5e5-1d0a-7054-cb0ce28b11d3" TYPE="linux_raid_member"
/dev/sdl2: UUID="70d18cdb-ea13-53ab-7054-cb0ce28b11d3" TYPE="linux_raid_member"
/dev/sdl5: UUID="94a53350-b407-1c4e-7c05-0d7cc0219d8b" UUID_SUB="b88e9056-b160-bd99-95bd-4723759ef9da" LABEL="synoserver:2" TYPE="linux_raid_member"
/dev/sdg1: UUID="6ef89a86-c5e5-1d0a-7054-cb0ce28b11d3" TYPE="linux_raid_member"
/dev/sdg2: UUID="6ada6b88-17e9-1dee-7054-cb0ce28b11d3" TYPE="linux_raid_member"
/dev/sdg5: UUID="94a53350-b407-1c4e-7c05-0d7cc0219d8b" UUID_SUB="0e978ddc-377d-6938-cba6-e012ec5959d2" LABEL="synoserver:2" TYPE="linux_raid_member"
/dev/sdg6: UUID="6cfdefd9-e5d1-de89-337b-3b2c9a6b30df" UUID_SUB="36b2c8fe-e0d0-a2ae-982a-01741c327ab5" LABEL="synoserver:3" TYPE="linux_raid_member"
/dev/sdc1: UUID="6ef89a86-c5e5-1d0a-7054-cb0ce28b11d3" TYPE="linux_raid_member"
/dev/sdc2: UUID="6ada6b88-17e9-1dee-7054-cb0ce28b11d3" TYPE="linux_raid_member"
/dev/sdc5: UUID="94a53350-b407-1c4e-7c05-0d7cc0219d8b" UUID_SUB="be18e386-319b-439a-6b74-cfeee766f99e" LABEL="synoserver:2" TYPE="linux_raid_member"
/dev/sdc6: UUID="6cfdefd9-e5d1-de89-337b-3b2c9a6b30df" UUID_SUB="4e74cfe7-2ab6-c68b-25f0-170985420de4" LABEL="synoserver:3" TYPE="linux_raid_member"
/dev/sde1: UUID="6ef89a86-c5e5-1d0a-7054-cb0ce28b11d3" TYPE="linux_raid_member"
/dev/sde2: UUID="6ada6b88-17e9-1dee-7054-cb0ce28b11d3" TYPE="linux_raid_member"
/dev/sde5: UUID="94a53350-b407-1c4e-7c05-0d7cc0219d8b" UUID_SUB="63b6f467-6158-b0d5-1883-8d64a77324ea" LABEL="synoserver:2" TYPE="linux_raid_member"
/dev/sde6: UUID="6cfdefd9-e5d1-de89-337b-3b2c9a6b30df" UUID_SUB="f22566b8-8ee7-11cb-2bf4-f8b418219df7" LABEL="synoserver:3" TYPE="linux_raid_member"
/dev/sdf1: UUID="6ef89a86-c5e5-1d0a-7054-cb0ce28b11d3" TYPE="linux_raid_member"
/dev/sdf2: UUID="6ada6b88-17e9-1dee-7054-cb0ce28b11d3" TYPE="linux_raid_member"
/dev/sdf5: UUID="94a53350-b407-1c4e-7c05-0d7cc0219d8b" UUID_SUB="6528d536-54b7-80c8-b782-0b835c5a3dd1" LABEL="synoserver:2" TYPE="linux_raid_member"
/dev/sdf6: UUID="6cfdefd9-e5d1-de89-337b-3b2c9a6b30df" UUID_SUB="149bd4ea-263f-4ea0-ea61-0e9ac463377e" LABEL="synoserver:3" TYPE="linux_raid_member"
/dev/sdh1: UUID="6ef89a86-c5e5-1d0a-7054-cb0ce28b11d3" TYPE="linux_raid_member"
/dev/sdh2: UUID="6ada6b88-17e9-1dee-7054-cb0ce28b11d3" TYPE="linux_raid_member"
/dev/sdh5: UUID="94a53350-b407-1c4e-7c05-0d7cc0219d8b" UUID_SUB="975f99b1-8fe3-c2d0-6e01-28bfb79d991e" LABEL="synoserver:2" TYPE="linux_raid_member"
/dev/sdh6: UUID="6cfdefd9-e5d1-de89-337b-3b2c9a6b30df" UUID_SUB="0b8c8f7a-b301-eee9-ff3d-874af44771bc" LABEL="synoserver:3" TYPE="linux_raid_member"
/dev/sdb1: UUID="6ef89a86-c5e5-1d0a-7054-cb0ce28b11d3" TYPE="linux_raid_member"
/dev/sdb2: UUID="6ada6b88-17e9-1dee-7054-cb0ce28b11d3" TYPE="linux_raid_member"
/dev/sdb5: UUID="94a53350-b407-1c4e-7c05-0d7cc0219d8b" UUID_SUB="d9074638-d3e6-1949-ff1e-3120088090bc" LABEL="synoserver:2" TYPE="linux_raid_member"
/dev/sdb6: UUID="6cfdefd9-e5d1-de89-337b-3b2c9a6b30df" UUID_SUB="b405ce8d-1b90-135e-c7c7-a353e2e85845" LABEL="synoserver:3" TYPE="linux_raid_member"
/dev/md127: UUID="kPOF2B-zXz8-KaIp-NI0m-3srq-iClW-ZLQjrc" TYPE="LVM2_member"
/dev/md126: UUID="DtftOy-TPZz-jQFu-fzuJ-a8F7-WS5Q-ca9qle" TYPE="LVM2_member"
/dev/md125: UUID="hV4zYs-3b4P-jugT-CuYx-Kr3Z-yOpS-z1eKpF" TYPE="LVM2_member"
/dev/mapper/vg1000-lv: LABEL="1.42.6-4493" UUID="bbaf3690-d7cd-41b9-92bd-68d3e19aeba9" TYPE="ext4"
/dev/sdd1: LABEL="DATA" UUID="845f75b9-ec1e-479e-aad2-69480573e82a" TYPE="ext4"
-
If you could tell me how to make it just one big storage pool, that would be great.
I'd like to get it running before bedtime, lol.
By the way, how long will it take to rebuild everything, or just to make it into one data storage? -
You are using LVM on top of RAID. I think you need to install the LVM plugin and add it. I don't use LVM, so I'm not sure what further steps you need to take.
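In case it helps, here is a rough sketch of the LVM steps that are usually involved; treat it as a guess, since I don't use LVM myself. The volume group name vg1000 comes from your blkid output, and the mount point is just an example (normally you would mount it from OMV's Filesystems tab instead):
Code
# scan for LVM volume groups sitting on the assembled md arrays
vgscan
# activate the Synology volume group (vg1000 according to blkid)
vgchange -ay vg1000
# the logical volume should then show up as /dev/vg1000/lv (ext4);
# OMV's Filesystems tab can mount it, or manually for a quick look:
mkdir -p /mnt/syno
mount /dev/vg1000/lv /mnt/syno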
-
Well, I hope there is someone else who can; the LVM plugin is installed.
I'd just like to know: if I repair it, is it then possible to make one RAID out of all my disks and mount it?
And will it take a long time to repair it, make a RAID out of it, and mount it?
Because I like the interface a lot, and the plugins too, and I'd really like to get it up and running as my main server.
But if I have to wait days before the array is built, it might not be the one for me.
Also, I already tested another plugin that failed on me, but I'll make a separate topic for that.
So please, anybody who can help me out ASAP, shoot, so I can get it up and running before my 7-day working period starts again. -
Well, I took the plunge, wiped all the disks, and made a ZFS pool out of them.
Everything is up and running.
But now my main issue:
After the ZFS build I am left with 13.71 TB of the 25 TB the disks add up to.
The pool is made as RAID-Z2 with 9x 2 TB disks plus one 4 TB and one 3 TB disk.
I don't mind destroying the pool and starting again to get as much storage as possible; redundancy isn't important to me, as it only holds media, and I back up the important stuff to an external disk.
Any help would be great.
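For what it's worth, the lost space is expected with RAID-Z2: every disk in the vdev is treated as if it were the size of the smallest member (2 TB here), and roughly two disks' worth is reserved for parity, on top of the usual TB-vs-TiB difference. If redundancy really doesn't matter, a plain striped pool uses the full size of every disk. A minimal sketch, assuming the pool is called mediapool and the member disks are still sdb, sdc, sde, sdf, sdg, sdh, sdj, sdk, sdl, sdm and sdn as in the earlier blkid output (double-check the names first, and note a single failed disk then loses the whole pool):
Code
# destroy the existing raidz2 pool (this erases everything on it!)
zpool destroy mediapool
# create a plain striped pool across all eleven disks - no parity,
# so the full capacity of each disk is usable
zpool create -f mediapool sdb sdc sde sdf sdg sdh sdj sdk sdl sdm sdn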