snapraid + aufs = problems with parity disc and filesystem reports multiply-claimed blocks


    • snapraid + aufs = problems with parity disc and filesystem reports multiply-claimed blocks

      I'm having problems setting up my first snapraid + aufs pool. It currently holds 3x 3 TB discs: 1 parity disc and 2 data + content discs configured in snapraid (running OMV), plus an aufs pool created from the 2 data discs; a sketch of the matching snapraid.conf follows below.
      I filled the aufs pool with about 1.5 TB of data and then restarted the box (power outage and UPS installation). Since the restart I get error messages during the automatic fs check at boot, as shown below:
      (The data drives hold backed-up, replaceable data, so no worries there.)
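
      For reference, a snapraid.conf matching that layout would look roughly like this; the mount paths are placeholders, and the real ones on OMV are the /media/<uuid> paths visible in the logs below:

      # /etc/snapraid.conf -- hypothetical paths; 1 parity disc, 2 data discs
      parity /media/parity-disc/snapraid.parity
      content /media/data-disc-1/snapraid.content
      content /media/data-disc-2/snapraid.content
      data Data1 /media/data-disc-1/
      data Data2 /media/data-disc-2/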


      Log of fsck -C -R -A -a
      Wed Apr 22 18:34:07 2015
      fsck from util-linux 2.20.1
      3TB1 contains a file system with errors, check forced.
      3TB2: clean, 51317/183148544 files, 358176784/732566385 blocks
      3TB3: clean, 50603/183148544 files, 358201716/732566385 blocks
      3TB1: Duplicate or bad block in use!
      3TB1: Multiply-claimed block(s) in inode 23592961: 94371842
      3TB1: Multiply-claimed block(s) in inode 26476545: 105906179
      3TB1: (There are 2 inodes containing multiply-claimed blocks.)
      3TB1: File /.wh..wh.orph (inode #23592961, mod time Mon Apr 20 22:51:50 2015)
      has 1 multiply-claimed block(s), shared with 1 file(s):
      3TB1: <filesystem metadata>
      3TB1:
      3TB1: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
      (i.e., without -a or -p options)
      fsck died with exit status 4
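
      (For anyone hitting the same wall: exit status 4 from fsck means errors were left uncorrected, and the manual run it asks for would look something like the lines below; /dev/sdb1 is a placeholder for whatever device 3TB1 really is.

      umount /dev/sdb1          # the filesystem must be unmounted first
      fsck.ext4 -f /dev/sdb1    # -f forces a full check; answer the prompts interactively
      )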


      OK, I managed to get past that with Ctrl+D, since I did not want to make things worse with fsck. The system booted, I ran a snapraid sync, and this followed:


      root@mediavault:~# snapraid sync
      Self test...
      Loading state from /media/fae60755-feec-49f0-bf97-b884c09d9f3b/snapraid.content...
      Scanning disk Data1...
      Scanning disk Data2...
      Using 1144 MiB of memory.
      Saving state to /media/fae60755-feec-49f0-bf97-b884c09d9f3b/snapraid.content...
      Saving state to /media/d4873045-bedf-416c-b228-245599be44dc/snapraid.content...


      After this follow about 100k lines of outofparity messages like the one below, and the run ends with:

      outofparity /media/d4873045-bedf-416c-b228-245599be44dc/xxx//xxx/Tagged Image File/601-664/file661.TIF
      WARNING! Without an accessible Parity file, it isn't possible to sync.
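
      (If snapraid.parity itself turns out to be unrecoverable, my understanding is that the parity can simply be regenerated from the intact data discs once the filesystem is repaired, e.g. with a forced full sync, assuming a snapraid version that supports the flag:

      snapraid sync -F    # -F / --force-full recomputes all the parity from the data discs
      )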


      So I am now running fsck manually on the unmounted 3TB1 drive. It has been running for 3 hours; can I expect it to take much longer?
      Was it OK to answer yes to the questions it posed?
      Is it normal that the snapraid.parity file has "multiply-claimed blocks"?

      See output below:

      fsck from util-linux 2.20.1
      e2fsck 1.42.5 (29-Jul-2012)
      3TB1 contains a file system with errors, check forced.
      Pass 1: Checking inodes, blocks, and sizes
      Inode 15 has an invalid extent node (blk 199753733, lblk 187242496)
      Clear<y>? yes
      Inode 15 has an invalid extent node (blk 9605, lblk 4294967295)
      Clear<y>? yes
      Inode 15, i_blocks is 1596588200, should be 1496891544. Fix<y>? yes

      Running additional passes to resolve blocks claimed by more than one inode...
      Pass 1B: Rescanning for multiply-claimed blocks
      Multiply-claimed block(s) in inode 15: 144179201
      Multiply-claimed block(s) in inode 23592961: 94371842
      Multiply-claimed block(s) in inode 26476545: 105906179
      Pass 1C: Scanning directories for inodes with multiply-claimed blocks
      Pass 1D: Reconciling multiply-claimed blocks
      (There are 3 inodes containing multiply-claimed blocks.)

      File /snapraid.parity (inode #15, mod time Wed Apr 22 19:34:31 2015)
      has 1 multiply-claimed block(s), shared with 1 file(s):
      <filesystem metadata>
      Clone multiply-claimed blocks<y>? yes
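
      (As far as I understand e2fsck, "cloning" gives each of the inodes sharing a block its own private copy, so all the files survive but one of them may end up containing a wrong block. For snapraid.parity that should be harmless, since parity can be recomputed. To check which inodes claim a given block, debugfs can map blocks to inodes and inodes to pathnames; the device name is a placeholder:

      debugfs -R "icheck 144179201" /dev/sdb1    # block -> inode
      debugfs -R "ncheck 15" /dev/sdb1           # inode -> pathname
      )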


      So now I have no idea what to do, nor what I am doing. I have tried googling, but I'm having a hard time finding anyone with the same problem, so I turn to these forums: could someone enlighten me as to where I am failing?

      Thank you all in advance and sorry for the wall of text and logs.

    • I would say you need to fsck all three drives to fix that problem. You can delete any .wh..wh files/directories. After that, snapraid should be OK; a sketch of those steps follows below.
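
      A rough sketch of those steps; every device name and mount path here is a placeholder, so substitute the real ones (blkid or the OMV UI will show them):

      # unmount the aufs pool first, then the member filesystems
      umount /media/pool
      umount /dev/sdb1 /dev/sdc1 /dev/sdd1
      fsck.ext4 -f /dev/sdb1    # repeat for each of the three discs
      fsck.ext4 -f /dev/sdc1
      fsck.ext4 -f /dev/sdd1
      mount -a
      # list the aufs housekeeping entries on each data disc before deleting anything
      find /media/data-disc-1 -maxdepth 1 -name '.wh..wh*'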
    • Alright, I will fsck all the drives, although only the one holding the parity reported errors.
      Did I do the right thing in "cloning multiply-claimed blocks", or is this a familiar "fix" for such problems?
      Have you heard of these kinds of problems before?

      About the .wh..wh files/directories: can I delete them from all of the drives? What are they for?

      Hopefully fsck will fix any errors now and won't take forever to finish; afterwards I hope my snapraid sync will run without hiccups.
      *fingers crossed*
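
      (Once fsck is done and the sync goes through, a sanity pass along these lines should confirm things are consistent; snapraid check re-reads every file and verifies it against the stored hashes and the parity:

      snapraid sync
      snapraid check
      )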
    • I haven't seen the cloning problem before. It is a filesystem issue, not an aufs or snapraid one.

      The .wh..wh files are temporary files aufs uses.
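
      (To elaborate a little, as I understand aufs's conventions: a whiteout named .wh.<name> on the writable branch hides <name> on a read-only branch, and .wh..wh.orph, the entry flagged in the fsck log above, is aufs's internal directory for orphaned files. Listing them per branch is harmless; the path is a placeholder:

      find /media/data-disc-1 -maxdepth 1 -name '.wh.*' -ls
      )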
    • That is too long unless there are serious drive problems. I've never been in that situation, so I'm not sure what to do now.