[HowTo] SnapRAID in OMV

  • Thanks, Solo, for your directions.
    The fix did work, even though I don't understand how it could generate 23 errors when the parity was fresh and the old disk hadn't failed, but it regenerated the files on the new drive.


    But unfortunately I must say my disk catalog is trashed. I only changed SAS0, but after a reboot my 3rd disk is missing and the SnapRAID pointer to SAS2 shows "false" as the volume.
    I have now removed all data disks from SnapRAID, completely deleted the aufs pool, and I'm trying to get OMV to see my 3rd disk and recognize that disk1 (the old one) no longer exists,
    because it still shows the old disk in Filesystems, with the same names as the new one.


    It's a complete mess, and I couldn't do a rescan to resolve the fix issues. I suppose that, since I still have the old disk, I will be able to do a diff between the two.
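
    Something like the following should let me compare them once both drives are mounted (the mount points below are just placeholders, not my real paths):

    Code
    # compare old and new drive contents read-only; report files that differ or are missing
    diff -rq /media/OLD /media/NEW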


    Edit:
    I had to remove the Plex plugin because it was using the old disk for its working folder and wouldn't let me change the volume.
    Now Filesystems lets me unmount the new drive but not the old one.
    MySQL plugin removed.
    Manually unmounted the old drive and changed its label. SAS2 is still missing.

  • Thanks Solo.


    It seems the missing drive had a bad data cable; that could have happened during the disk replacement. I've replaced the cable and now it's online again. I had to manually recreate its mount point, because the GUI complained about it when mounting (strange).
    I've just recreated my SnapRAID configuration and run a status command.
    It showed at the top:

    Quote

    WARNING! Content file '/media/38488698-c665-4b76-a7fb-65dd305a3ac4/snapraid.content' not found, trying with another copy...
    UUID change for disk 'SAS0' from '8285e427-3f7b-4d43-a72e-49192f94bdc4' to '38488698-c665-4b76-a7fb-65dd305a3ac4'


    And then it reports everything OK.


    Then I ran a diff command, and I don't know how to interpret the output:


    *If you want the complete log, I've saved it.


    Wouldn't it be better to remove all traces of SnapRAID and start from scratch?
    I have the old drive mounted on its own as SASX. Should I transfer the SnapRAID files from there before doing anything else?

  • I suspect the old drive was failing because it holds the pool share folder. It was an old but working drive, and perhaps the share caused it to get too much access. I only use that array around dinner time. Does that make sense?

  • You can ignore the "WARNING! Content file '.../snapraid.content' not found" message. There are other .content files and SnapRAID uses them automatically.
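
    For illustration only (the paths here are made up, not taken from your system): a snapraid.conf normally lists several content files, one per data disk plus one on the OS disk, so a single missing copy does no harm.

    Code
    # example snapraid.conf excerpt: redundant copies of the content file
    content /var/snapraid/snapraid.content
    content /media/disk1/snapraid.content
    content /media/disk2/snapraid.content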


    Regarding the log file:
    I can't see anything bad.
    Check with grep updated /path/to/logfile whether your files are correct; it prints out the "updated" ones. Do the same with removed and added. If everything is OK, you can run SYNC again.

  • Yes, by the log file I meant the saved "snapraid diff" output.
    It works for both.
    Use either:

    Code
    snapraid diff | grep updated


    or

    Code
    grep updated /path/to/saved/diff_output_before


    The first one is noticeably slower, because it runs snapraid diff again. With the second one you are only parsing the already saved output (a plain text file).
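
    If you want to see the output and keep a copy for later grepping at the same time, this should also work (the output path is just an example):

    Code
    # run diff once, print it to the screen and save it to a file
    snapraid diff | tee /path/to/saved/diff_output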

  • Wow! I have a 612 MB fix log showing line after line like:

    Quote

    error:22183:SAS0:D1/Series/Star Trek Enterprise/Temporada 2/Star Trek- Enterprise - 2x08 - El comunicador.avi: Read error at position 1039
    fixed:22183:SAS0:D1/Series/Star Trek Enterprise/Temporada 2/Star Trek- Enterprise - 2x08 - El comunicador.avi: Fixed data error at position 1039
    error:22184:SAS0:D1/Series/Star Trek Enterprise/Temporada 2/Star Trek- Enterprise - 2x08 - El comunicador.avi: Read error at position 1040
    fixed:22184:SAS0:D1/Series/Star Trek Enterprise/Temporada 2/Star Trek- Enterprise - 2x08 - El comunicador.avi: Fixed data error at position 1040
    error:22185:SAS0:D1/Series/Star Trek Enterprise/Temporada 2/Star Trek- Enterprise - 2x08 - El comunicador.avi: Read error at position 1041
    fixed:22185:SAS0:D1/Series/Star Trek Enterprise/Temporada 2/Star Trek- Enterprise - 2x08 - El comunicador.avi: Fixed data error at position 1041


    Do you know how I can get only the affected files? For example, by taking only the part between the 3rd and 4th ":" and removing duplicates?

  • Do you mean this part:

    Quote

    D1/Series/Star Trek Enterprise/Temporada 2/Star Trek- Enterprise - 2x08 - El comunicador.avi


    Try this (untested):

    Code
    snapraid fix | sed 's/.*:.*:\(.*\):.*/\1/g' | sort -u


    or

    Code
    cat /path/to/snapraid_fix/output | sed 's/.*:.*:\(.*\):.*/\1/g' | sort -u
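
    Assuming the file paths themselves contain no ":" characters, a field-based variant of the same idea would be (also untested):

    Code
    # keep only the error/fixed lines, take the 4th ':'-separated field (the path), drop duplicates
    grep -E '^(error|fixed):' /path/to/snapraid_fix/output | cut -d: -f4 | sort -u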
  • Thank you very much.
    Now that I can see the log report in a readable shape, I'm understanding much better how it works.


    It created a 612 MB file because it logs everything it does.
    Most of the records seem to be the files missing on the new drive being recovered from parity, like:

    Quote

    error:549:SAS0:D1/Peliculas/Corrupcion en Miami.avi: Read error at position 277
    fixed:549:SAS0:D1/Peliculas/Corrupcion en Miami.avi: Fixed data error at position 277


    Is that right?
    Then I only need to focus on the records marked as "unrecoverable", which totalled 13 but affected only two files. Right?
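
    To list just those, I can reuse the same trick (assuming the unrecoverable lines follow the same ":"-separated format as the error/fixed lines):

    Code
    # show only files with unrecoverable errors, each listed once
    grep unrecoverable /path/to/snapraid_fix/output | cut -d: -f4 | sort -u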


    So the facts in my case, so anyone can benefit from the experience, were:

    • I ended up with a messed-up drive configuration because all shares and dependencies on the bad drive (media manager services, databases and pooling included) must be dropped before replacing it. If the shares aren't dropped, the old drive's entry still exists and cannot be taken over by the new drive.
    • When replacing a drive, make sure before turning the server back on that all drives are operative and the wiring/connections are in good shape. With one drive missing, if I had applied the complete fix from the interface, something bad could have happened (I don't know what).
    • The fix log is useful, but it contains a huge amount of noise. The unrecoverable records are the interesting part.
    • I was missing quite a few files in the recovery and that alarmed me. I must remember, though, that if I configure an exclusion, those files won't have any redundancy.

    My SnapRAID is operational again; I was using it last night and it's waiting for its first sync. I hope aufs doesn't give me any problems.

  • Hello,


    I plan to install and configure SnapRAID on my OMV installation.
    Before going for it I would like to get some additional information.
    My current configuration is as follows: 9 disks of 4 TB each.
    I plan to use 2 parity disks for this. Is that enough?
    What is the maximum fill ratio for the data disks that I need to respect in order not to outgrow my parity disks?
    Are there any particular aspects I need to take into account with this kind of configuration?


    Thank you for your help.


    Sined

  • According to this: http://snapraid.sourceforge.net/faq.html#howmanypar
    two parity drives are enough for 5-14 data drives.
    Make sure your parity drives are at least the same size as the largest HDD in your array. I would say the maximum fill rate for the data drives is ~95%. If you are using the aufs plugin, the files will be evenly distributed across the disks, so when all your drives are 85% full you can easily add one, two, ... new ones and you are fine.
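
    As a rough sketch only (device paths are invented), a snapraid.conf for 9 x 4 TB disks with 2 parity and 7 data disks would look something like this:

    Code
    # two parity files, one per parity disk
    parity   /media/parity1/snapraid.parity
    2-parity /media/parity2/snapraid.2-parity
    # redundant content files
    content /var/snapraid/snapraid.content
    content /media/data1/snapraid.content
    # seven data disks
    disk d1 /media/data1
    disk d2 /media/data2
    # ... d3 through d7 follow the same pattern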


    http://omv-extras.org/simple/index.php?id=plugins-stable -> aufs

    Quote

    Directories are called branches and pooled into one share. When files are written to the pool, they are evenly distributed across the branches. Branches can be on the same drive or different drives. Drives can be the same or different sizes.


    The aufs-pool can then be shared over SMB, NFS, $whatever ...

    • Official post

    The new unionfilesystems plugin allows you to create either pool.


  • I know this is a little off topic, but since you started this discussion, allow me to ask a brief question:


    What is currently the best solution for pooling?


    - AUFS (my impression: I had problems with permissions when moving files on the pool share)
    - mhddfs (my impression: works flawlessly but is slow and resource-intensive)


    and now there is: "unionfilesystems"


    Rgrds
    firsttris

    • Official post

    unionfilesystems isn't another pooling option. It just combines the aufs and mhddfs plugins into one plugin with a new interface.


    Was your permissions problem an issue with ACLs?


    I didn't think mhddfs was slow. It is a FUSE filesystem, which makes it a little slower, but not bad.

