RAID Filesystem is n/a

    • OMV 2.x
    • RAID Filesystem is n/a

      I built a software RAID 6 across 11 drives:
      7 SATA ports on the mainboard, extended with a HighPoint Rocket 640L card that adds 4 more SATA ports.

      Maybe the HighPoint had a brief problem and the RAID lost the connection at that moment. The HighPoint itself still looks OK now.
      The result:
      SMART only showed the temperatures of 7 drives, although all drives were listed.
      OMV wanted to resync the array, but it hung at 27% and only the estimated time kept growing.
      Because a software restart didn't work, I did a hard reset.

      After rebooting:
      All drives are physically present.
      SMART shows the temperatures of all drives.
      -> But the filesystem of the RAID is n/a
      -> And the RAID no longer exists

      "fdisk -l" say to all 11 HDs:
      "doesn't contain a valid partition table"

      To solve the problem I found (click) a repair command for fixing the XFS filesystem.
      I think it is possible that OMV didn't shut the drives down cleanly because of the hard reset.
      But if I understand it right, that would be required before I run repair commands like these:

      Source Code

      xfs_check
      xfs_repair -n
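      (If I understand it right, one would run the check read-only against the assembled RAID device while it is not mounted. /dev/md127 is only my guess for the device name, and the -n switch is supposed to change nothing on the disks:)

      Source Code

      umount /dev/md127          # make sure the XFS filesystem is not mounted anywhere
      xfs_repair -n /dev/md127   # no-modify mode: only reports problems, writes nothing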


      But I have no idea how to shut the drives down properly, and I'm also too afraid of losing my data with the repair commands.
      Because my Linux and English knowledge is minimal, I hope we can find a solution to help a clueless German girl save her OMV!

      lG
      Petra
    • You need to fix your array before you can fix the filesystem. What is the output of cat /proc/mdstat, and the complete output of fdisk -l?
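      Run them as root and post the complete output; blkid wouldn't hurt either, since it shows which disks still carry a RAID signature:

      Source Code

      cat /proc/mdstat   # is any md array assembled at all?
      fdisk -l           # every disk the kernel can see
      blkid              # which disks are still linux_raid_member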
    • Hello,

      I only know that I have to fix something :(

      Source Code

      /var$ cat /proc/mdstat
      Personalities : [raid6] [raid5] [raid4]
      unused devices: <none>


      Source Code

      /var$ fdisk -l
      Disk /dev/sda doesn't contain a valid partition table
      Disk /dev/sdb doesn't contain a valid partition table
      Disk /dev/sdc doesn't contain a valid partition table
      Disk /dev/sdd doesn't contain a valid partition table
      Disk /dev/sde doesn't contain a valid partition table
      Disk /dev/sdf doesn't contain a valid partition table
      Disk /dev/sdg doesn't contain a valid partition table
      Disk /dev/sdi doesn't contain a valid partition table
      Disk /dev/sdh doesn't contain a valid partition table
      Disk /dev/sdj doesn't contain a valid partition table
      Disk /dev/sdl doesn't contain a valid partition table
      Disk /dev/sda: 6001.2 GB, 6001175126016 bytes
      255 heads, 63 sectors/track, 729601 cylinders, total 11721045168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000
      Disk /dev/sdb: 6001.2 GB, 6001175126016 bytes
      255 heads, 63 sectors/track, 729601 cylinders, total 11721045168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000
      Disk /dev/sdc: 6001.2 GB, 6001175126016 bytes
      255 heads, 63 sectors/track, 729601 cylinders, total 11721045168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000
      Disk /dev/sdd: 6001.2 GB, 6001175126016 bytes
      255 heads, 63 sectors/track, 729601 cylinders, total 11721045168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000
      Disk /dev/sde: 6001.2 GB, 6001175126016 bytes
      255 heads, 63 sectors/track, 729601 cylinders, total 11721045168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000
      Disk /dev/sdf: 6001.2 GB, 6001175126016 bytes
      255 heads, 63 sectors/track, 729601 cylinders, total 11721045168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000
      Disk /dev/sdg: 6001.2 GB, 6001175126016 bytes
      255 heads, 63 sectors/track, 729601 cylinders, total 11721045168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000
      Disk /dev/sdi: 6001.2 GB, 6001175126016 bytes
      255 heads, 63 sectors/track, 729601 cylinders, total 11721045168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000
      Disk /dev/sdh: 6001.2 GB, 6001175126016 bytes
      255 heads, 63 sectors/track, 729601 cylinders, total 11721045168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000
      Disk /dev/sdj: 6001.2 GB, 6001175126016 bytes
      255 heads, 63 sectors/track, 729601 cylinders, total 11721045168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000
      Disk /dev/sdk: 120.0 GB, 120034123776 bytes
      255 heads, 63 sectors/track, 14593 cylinders, total 234441648 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00057615
      Device Boot Start End Blocks Id System
      /dev/sdk1 * 2048 224860159 112429056 83 Linux
      /dev/sdk2 224862206 234440703 4789249 5 Extended
      /dev/sdk5 224862208 234440703 4789248 82 Linux swap / Solaris
      Disk /dev/sdl: 6001.2 GB, 6001175126016 bytes
      255 heads, 63 sectors/track, 729601 cylinders, total 11721045168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000


      Source Code

      /var$ blkid
      /dev/sda: UUID="beb93a19-6fda-2702-d7a2-8213274b1c68" UUID_SUB="7b33f34d-b958-5faa-a54e-4ecd106d23fa" LABEL="omv:nas" TYPE="linux_raid_member"
      /dev/sdb: UUID="beb93a19-6fda-2702-d7a2-8213274b1c68" UUID_SUB="0e0ced35-00b3-a5c9-753e-1eee1feea5fd" LABEL="omv:nas" TYPE="linux_raid_member"
      /dev/sdc: UUID="beb93a19-6fda-2702-d7a2-8213274b1c68" UUID_SUB="dd2243f2-0dba-5925-9a0c-3d742e4b52f8" LABEL="omv:nas" TYPE="linux_raid_member"
      /dev/sdd: UUID="beb93a19-6fda-2702-d7a2-8213274b1c68" UUID_SUB="a14d4110-75c0-9a6d-ca22-e18dc4d29402" LABEL="omv:nas" TYPE="linux_raid_member"
      /dev/sde: UUID="beb93a19-6fda-2702-d7a2-8213274b1c68" UUID_SUB="5962f353-cb3f-0aa6-954a-0fe51e1e5f98" LABEL="omv:nas" TYPE="linux_raid_member"
      /dev/sdf: UUID="beb93a19-6fda-2702-d7a2-8213274b1c68" UUID_SUB="f1e564d4-ba55-1b5b-6151-dd0c3249362e" LABEL="omv:nas" TYPE="linux_raid_member"
      /dev/sdg: UUID="beb93a19-6fda-2702-d7a2-8213274b1c68" UUID_SUB="169b5dbf-198f-d0a8-9abe-264c58ac016d" LABEL="omv:nas" TYPE="linux_raid_member"
      /dev/sdi: UUID="beb93a19-6fda-2702-d7a2-8213274b1c68" UUID_SUB="48de70ba-5e31-9448-3424-13679f1841f9" LABEL="omv:nas" TYPE="linux_raid_member"
      /dev/sdh: UUID="beb93a19-6fda-2702-d7a2-8213274b1c68" UUID_SUB="4624bfb0-aecd-9e5e-f53a-9295123281b6" LABEL="omv:nas" TYPE="linux_raid_member"
      /dev/sdj: UUID="beb93a19-6fda-2702-d7a2-8213274b1c68" UUID_SUB="0fe7c41e-84e5-a703-d822-73790a841958" LABEL="omv:nas" TYPE="linux_raid_member"
      /dev/sdk1: UUID="cd01f2da-e6d3-442d-a224-5185f48349a9" TYPE="ext4"
      /dev/sdk5: UUID="274cdab8-eddd-4da2-9ff6-31b99e8c04ea" TYPE="swap"
      /dev/sdl: UUID="beb93a19-6fda-2702-d7a2-8213274b1c68" UUID_SUB="dbd5554a-90d8-d7f2-7e25-2dbfea2ed685" LABEL="omv:nas" TYPE="linux_raid_member"


      You can find the OMV screenshots here (click)

      lG
      Petra


    • Source Code

      /media$ mdadm --assemble --scan
      mdadm: /dev/md/nas assembled from 7 drives - not enough to start the array.
      mdadm: No arrays found in config file or automatically


      and

      Source Code

      /media$ cat /proc/mdstat
      Personalities : [raid6] [raid5] [raid4]
      unused devices: <none>


      lG
      Petra
      Images: 1.gif, 2.gif, 3.gif, 4.gif, 5.gif (screenshots, 1,040×585)
    • Try this:
      mdadm --assemble --verbose --force /dev/md127 /dev/sd[abcdefghijl]
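      If you want to see why it refuses to start on its own, you can compare the event counters mdadm stored on each disk first, something like:

      Source Code

      mdadm --examine /dev/sd[abcdefghijl] | grep -E '/dev/sd|Events'   # drives that were kicked show a lower event count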
    • Solved?

      Oh ryecoaaron!
      Thank you so much! It looks as if OMV's forgetfulness is over!

      Source Code

      /media$ mdadm --assemble --verbose --force /dev/md127 /dev/sd[abcdefgihjl]
      mdadm: looking for devices for /dev/md127
      mdadm: /dev/sda is identified as a member of /dev/md127, slot 0.
      mdadm: /dev/sdb is identified as a member of /dev/md127, slot 1.
      mdadm: /dev/sdc is identified as a member of /dev/md127, slot 2.
      mdadm: /dev/sdd is identified as a member of /dev/md127, slot 3.
      mdadm: /dev/sde is identified as a member of /dev/md127, slot 4.
      mdadm: /dev/sdf is identified as a member of /dev/md127, slot 5.
      mdadm: /dev/sdg is identified as a member of /dev/md127, slot 6.
      mdadm: /dev/sdh is identified as a member of /dev/md127, slot 7.
      mdadm: /dev/sdi is identified as a member of /dev/md127, slot 8.
      mdadm: /dev/sdj is identified as a member of /dev/md127, slot 9.
      mdadm: /dev/sdl is identified as a member of /dev/md127, slot 10.
      mdadm: forcing event count in /dev/sdg(6) from 264 upto 266
      mdadm: forcing event count in /dev/sdh(7) from 264 upto 266
      mdadm: forcing event count in /dev/sdi(8) from 264 upto 266
      mdadm: forcing event count in /dev/sdj(9) from 264 upto 266
      mdadm: clearing FAULTY flag for device 6 in /dev/md127 for /dev/sdg
      mdadm: clearing FAULTY flag for device 7 in /dev/md127 for /dev/sdh
      mdadm: clearing FAULTY flag for device 8 in /dev/md127 for /dev/sdi
      mdadm: clearing FAULTY flag for device 9 in /dev/md127 for /dev/sdj
      mdadm: Marking array /dev/md127 as 'clean'
      mdadm: added /dev/sdb to /dev/md127 as 1
      mdadm: added /dev/sdc to /dev/md127 as 2
      mdadm: added /dev/sdd to /dev/md127 as 3
      mdadm: added /dev/sde to /dev/md127 as 4
      mdadm: added /dev/sdf to /dev/md127 as 5
      mdadm: added /dev/sdg to /dev/md127 as 6
      mdadm: added /dev/sdh to /dev/md127 as 7
      mdadm: added /dev/sdi to /dev/md127 as 8
      mdadm: added /dev/sdj to /dev/md127 as 9
      mdadm: added /dev/sdl to /dev/md127 as 10
      mdadm: added /dev/sda to /dev/md127 as 0
      mdadm: /dev/md127 has been started with 11 drives.


      Is the repair in progress now?
      A very special thank you for working through all those drive letters!
      You've given me back my faith in OMV!

      But the filesystem is still n/a and the RAID is pending.
      Do I have to do anything to start the repair?

      Or do I only have to wait?
      Because there is no progress bar...


      lG
      Petra


    • Glad it is working :) I wouldn't do anything to it until it is done syncing.
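      You can watch the resync from the command line if you want:

      Source Code

      watch -n 60 cat /proc/mdstat   # refreshes the progress line every 60 seconds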
    • I fear something is hanging...

      No progress bar...
      The system log says:

      Source Code

      Filter subsystem: Built-in target `write`: Dispatching value to all write plugins failed with status -1.


      And

      Source Code

      rrdcached plugin: rdc_update (/var/lib/rrdcached/db/localhost/df-root/df_complex-used.rrd,[1455384180:1142435840.000000] failed with status -1.


      I have no idea... :(

      lG
      Petra
      Images: repair-1.png, repair-2.png, repair-3.png (screenshots, 1,920×1,080)


    • I did a restart, but nothing changed.

      If I look at the RAID details:

      Source Code

      Version : 1.2
      Creation Time : Sat Jun 6 14:52:49 2015
      Raid Level : raid6
      Array Size : 52743518208 (50300.14 GiB 54009.36 GB)
      Used Dev Size : 5860390912 (5588.90 GiB 6001.04 GB)
      Raid Devices : 11
      Total Devices : 11
      Persistence : Superblock is persistent
      Update Time : Sat Jan 30 14:11:13 2016
      State : clean, resyncing (PENDING)
      Active Devices : 11
      Working Devices : 11
      Failed Devices : 0
      Spare Devices : 0
      Layout : left-symmetric
      Chunk Size : 512K
      Name : omv:nas (local to host omv)
      UUID : beb93a19:6fda2702:d7a28213:274b1c68
      Events : 266
      Number Major Minor RaidDevice State
      0 8 0 0 active sync /dev/sda
      1 8 16 1 active sync /dev/sdb
      2 8 32 2 active sync /dev/sdc
      3 8 48 3 active sync /dev/sdd
      4 8 64 4 active sync /dev/sde
      5 8 80 5 active sync /dev/sdf
      6 8 96 6 active sync /dev/sdg
      7 8 112 7 active sync /dev/sdh
      8 8 128 8 active sync /dev/sdi
      9 8 144 9 active sync /dev/sdj
      10 8 176 10 active sync /dev/sdl


      I think I have to click on "RAID Wiederherstellung" / "Restoration", but I lack the courage...

      lG
      Petra
    • The rrd errors are just because you don't have a filesystem mounted. Don't worry about it.

      DON"T REBOOT! If you do reboot, the resyncing just has to start over. Wait until the web interface or the cat /proc/mdstat says clean or active and no resyncing. Then you can reboot or mount the filesystem. This will also fix the rrd errors. Just have to be very patient especially when fixing a raid array this size.

      You don't need to click restore either. That is for replacing one drive. This is not your case.
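      If cat /proc/mdstat keeps showing the resync as PENDING instead of actually counting up, the array is probably still in read-only mode; that is also why mounting the filesystem can kick the resync off. Something like this should do the same (assuming the array is /dev/md127 as above):

      Source Code

      mdadm --readwrite /dev/md127   # leave auto-read-only mode so the pending resync can start
      cat /proc/mdstat               # should now show a real progress line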
    • OK,
      "RAID Wiederherstellung" / "Restoration" didn't work,
      but under "Filesystem" I selected the n/a "xfs" entry, mounted it and applied the change.
      "RAID" suddenly started the active resyncing! And it estimates a lovely 653 minutes. :)

      THX a lot 4 giving me the right inspiration :))

      lG
      Petra
    • Communication failure!

      Yeah!
      It has all fallen apart again:
      communication failure!
      Only a reset gets the machine going again...

      Source Code

      /media$ cat /proc/mdstat
      Personalities : [raid6] [raid5] [raid4]
      unused devices: <none>


      Maybe the problem is the second RAID controller. In the next few days I will try another one.

      lG
      Petra
      Images: repair-4.png (screenshot, 1,920×1,080)
    • That error is very minor. I wouldn't reboot because of it.

      I agree that your RAID controller may be causing the issues. You have a big investment in drives; I would buy some better quality hardware, even if it is an IBM M1015 from eBay flashed with new firmware.
    • The M1015 is an 8-port board. A 16-port board would probably cost quite a bit more than two M1015s (re-badged LSI boards). I have an LSI 9211-8i and an LSI 9260-8i. If you have to have a 16-port board, any LSI board would be good in my opinion. Most of these boards have SFF-8087 mini-SAS connectors (one connector can connect four SATA drives). You just need Mini-SAS SFF-8087 to four-SATA breakout cables.
    • Does ryecoaaron mean Aaron?
      Regardless, your tip is really great, because I think all these controllers are bull.... ;)
      I had an ASRock X99 WS-E in mind, but it would really be too expensive, especially considering that new RAM and a new CPU would hang on it as well.

      At the moment there are 4+2 SATA ports on my ASRock B85M Pro3. Currently I have a HighPoint Rocket 640L 4-port and a HighPoint Rocket 620 2-port controller.

      Tomorrow I will order the LSI 9211-8i and hope it is finally the rescue on my heavy SATA sea :)
      I could smooch you: THX!!!

      lG
      Petra
    • Assembly aborted

      So,
      the Avago SAS 9211-8i SGL controller is now installed.
      • All drives are shown
      • SMART works too
      • The filesystem is n/a
      • The RAID is empty (as before)

      Well, I ran your command, but it fails:

      Source Code

      /media$ mdadm --assemble --verbose --force /dev/md127 /dev/sd[abcdefghijl]
      mdadm: looking for devices for /dev/md127
      mdadm: no RAID superblock on /dev/sdh
      mdadm: /dev/sdh has no superblock - assembly aborted

      Do I need a driver for the controller?

      lG
      Petra
    • Sorry, first think, then assemble...

      Yeah!
      The drive letters have changed!
      "H" is now "L"

      Source Code

      /media$ mdadm --assemble --verbose --force /dev/md127 /dev/sd[abcdefgijkl]
      mdadm: looking for devices for /dev/md127
      mdadm: /dev/sda is identified as a member of /dev/md127, slot 0.
      mdadm: /dev/sdb is identified as a member of /dev/md127, slot 1.
      mdadm: /dev/sdc is identified as a member of /dev/md127, slot 2.
      mdadm: /dev/sdd is identified as a member of /dev/md127, slot 3.
      mdadm: /dev/sde is identified as a member of /dev/md127, slot 4.
      mdadm: /dev/sdf is identified as a member of /dev/md127, slot 5.
      mdadm: /dev/sdg is identified as a member of /dev/md127, slot 10.
      mdadm: /dev/sdi is identified as a member of /dev/md127, slot 6.
      mdadm: /dev/sdj is identified as a member of /dev/md127, slot 9.
      mdadm: /dev/sdk is identified as a member of /dev/md127, slot 8.
      mdadm: /dev/sdl is identified as a member of /dev/md127, slot 7.
      mdadm: forcing event count in /dev/sdi(6) from 274 upto 276
      mdadm: forcing event count in /dev/sdl(7) from 274 upto 276
      mdadm: forcing event count in /dev/sdk(8) from 274 upto 276
      mdadm: forcing event count in /dev/sdj(9) from 274 upto 276
      mdadm: clearing FAULTY flag for device 7 in /dev/md127 for /dev/sdi
      mdadm: clearing FAULTY flag for device 10 in /dev/md127 for /dev/sdl
      mdadm: clearing FAULTY flag for device 9 in /dev/md127 for /dev/sdk
      mdadm: clearing FAULTY flag for device 8 in /dev/md127 for /dev/sdj
      mdadm: Marking array /dev/md127 as 'clean'
      mdadm: added /dev/sdb to /dev/md127 as 1
      mdadm: added /dev/sdc to /dev/md127 as 2
      mdadm: added /dev/sdd to /dev/md127 as 3
      mdadm: added /dev/sde to /dev/md127 as 4
      mdadm: added /dev/sdf to /dev/md127 as 5
      mdadm: added /dev/sdi to /dev/md127 as 6
      mdadm: added /dev/sdl to /dev/md127 as 7
      mdadm: added /dev/sdk to /dev/md127 as 8
      mdadm: added /dev/sdj to /dev/md127 as 9
      mdadm: added /dev/sdg to /dev/md127 as 10
      mdadm: added /dev/sda to /dev/md127 as 0
      mdadm: /dev/md127 has been started with 11 drives.


      • The RAID is shown as clean
      • The filesystem is n/a

      So I mounted the filesystem,
      and (wonder!) the RAID is still clean!

      WinSCP shows all the files, but Windows doesn't find OMV...
      I hope I will remember the right thing :)

      lG
      Petra