SnapRAID question

  • I have played around with OMV5 on a dev machine and now want to reinstall my production machine (home media server) with OMV5. It is currently running an up-to-date OMV4 install.


    I know my config is overkill, but isn't that what we do?


    I have a server with 24 drive slots. I am in the midst of updating my SnapRAID config from 3 parity drives to 4, as I have the drives lying around.


    Will SnapRAID recognize the data drives and parity drives as a SnapRAID setup after I reinsert them following the new install? I understand I will have to install the appropriate plugins (SnapRAID and unionfs), but want to know if there are any pitfalls.


    My drive set up:

    3 x HGST 3TB parity drives. Currently adding parity drive 4.

    8 x HGST 2TB data drives. I don't like high-density disks due to their higher failure rates, and my chassis has the space, with 12 more open slots for expansion.


    Any advice is greatly appreciated.


    Thanks in advance.

  • Will SnapRAID recognize the data drives and parity drives as a SnapRAID setup after I reinsert them following the new install? I understand I will have to install the appropriate plugins (SnapRAID and unionfs), but want to know if there are any pitfalls.

    This is an interesting question, but it's something I've never tested. However, I'd recommend booting from thumbdrives.

    Thumbdrives are inexpensive and easy to clone, so you could test this without losing a working configuration or risking existing data, and confirm that it works before committing to a new configuration.
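    Cloning the boot stick is straightforward with dd. A minimal sketch follows; /dev/sdX and /dev/sdY are placeholder device names (double-check yours with lsblk before writing to a real disk), and the sketch below runs against ordinary temp files instead of devices so the verification step is visible:

```shell
# Sketch of "clone, then verify" for a boot thumbdrive.
# In real use: dd if=/dev/sdX of=/dev/sdY bs=4M conv=fsync status=progress
# (/dev/sdX = source stick, /dev/sdY = blank stick -- placeholders, check lsblk!)
# Demonstrated here on regular files:
src=$(mktemp)   # stands in for /dev/sdX
dst=$(mktemp)   # stands in for /dev/sdY
dd if=/dev/urandom of="$src" bs=1024 count=1024 status=none

# The clone itself.
dd if="$src" of="$dst" bs=4M conv=fsync status=none

# Verify: checksums must match before trusting the copy.
a=$(sha256sum < "$src" | cut -d' ' -f1)
b=$(sha256sum < "$dst" | cut -d' ' -f1)
[ "$a" = "$b" ] && echo "clone verified"
rm -f "$src" "$dst"
```

    The checksum comparison is the important part: a clone you haven't verified is not a backup you can fall back on.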

  • I actually have another server with 4 x 4TB drives. Except for SnapRAID, it will be the same.


    All data will be moved to this server once it's operational and all the bugs are worked out of my setup.


    Then I will rebuild the current server, and take my chances.


    Thanks for the advice.

  • Ok, sanity check needed. My apologies in advance for the long post, but I want to give as much info as possible for feedback. And yes, I have two backups of all data, just in case.


    Before pulling the old OS drive I ran snapraid -R sync to get rid of all the file fragments. Then I scrubbed the entire array: snapraid scrub -p 100 -o 0.


    Once the fresh OMV5 install was set up and ready, I first tried copying my backed-up snapraid.conf file and booting the server. No joy. Rebuild-array time.


    I rebuilt the SnapRAID array through the OMV5 GUI. I took meticulous care to give each drive the same name it had in the previous OMV4 array. It was a little tougher this time because the SnapRAID plugin on OMV4 (version 3.3.7) referenced the disks by type and serial number, whereas the OMV5 version references them by UUID. I just made sure that each of the 8 data disks and each of the 4 parity disks kept the same name: d01, d02, d03, etc.


    Old:

    #####################################################################

    # OMV-Name: d01 Drive Label:

    content /srv/dev-disk-by-id-ata-Hitachi_HUA723020ALA640_MK0171YFHAJM4A-part1/snapraid.content

    disk d01 /srv/dev-disk-by-id-ata-Hitachi_HUA723020ALA640_MK0171YFHAJM4A-part1

    #####################################################################


    New:

    #####################################################################

    # OMV-Name: d01 Drive Label:

    content /srv/dev-disk-by-uuid-36b30f8f-2e71-43c6-9cff-48a552f64331/snapraid.content

    disk d01 /srv/dev-disk-by-uuid-36b30f8f-2e71-43c6-9cff-48a552f64331

    #####################################################################
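    Since only the path style changed between the two configs, the disk names can be checked mechanically. A small sketch, using abridged two-disk heredocs as stand-ins for the real snapraid.conf files (the d02 old-style path is a made-up example), confirms that every "disk" name in the old config reappears unchanged in the new one:

```shell
# Compare the set of SnapRAID disk names (d01, d02, ...) between the old
# OMV4 config and the new OMV5 config. The heredocs below are abridged
# stand-ins for the real files; the d02 by-id path is a fabricated example.
cat > /tmp/old.conf <<'EOF'
disk d01 /srv/dev-disk-by-id-ata-Hitachi_HUA723020ALA640_MK0171YFHAJM4A-part1
disk d02 /srv/dev-disk-by-id-ata-Hitachi_EXAMPLE-part1
EOF
cat > /tmp/new.conf <<'EOF'
disk d01 /srv/dev-disk-by-uuid-36b30f8f-2e71-43c6-9cff-48a552f64331
disk d02 /srv/dev-disk-by-uuid-96c640aa-25ae-42b6-a5b2-8f9daa836f57
EOF

# Pull the second column (the disk name) from each "disk" line and sort it.
awk '$1 == "disk" { print $2 }' /tmp/old.conf | sort > /tmp/old.names
awk '$1 == "disk" { print $2 }' /tmp/new.conf | sort > /tmp/new.names

# An empty diff means every name survived the migration.
diff /tmp/old.names /tmp/new.names && echo "disk names match"
rm -f /tmp/old.conf /tmp/new.conf /tmp/old.names /tmp/new.names
```

    The same comparison run against the real 12-drive configs would catch a transposed or renamed disk before the first sync, which is exactly the mistake SnapRAID cannot recover from on its own.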



    I then laid unionfs on top of the rebuilt array, with the same 8 data disks as before.


    Now for the test:

    snapraid status

    Self test...

    Loading state from /srv/dev-disk-by-uuid-36b30f8f-2e71-43c6-9cff-48a552f64331/snapraid.content...

    Using 485 MiB of memory for the file-system.

    SnapRAID status report:


       Files Fragmented  Excess  Wasted   Used   Free  Use Name
              Files   Fragments    GB      GB     GB
       10563        0        0      -     934   1032  47% d01
        1719        0        0      -     926   1040  47% d02
        1639        0        0      -     923   1043  46% d03
        1523        0        0      -     931   1035  47% d04
        1898        0        0      -     932   1035  47% d05
        2965        0        0      -     928   1039  47% d06
        5348        0        0      -     922   1044  46% d07
        6147        0        0      -     868   1099  44% d08
    --------------------------------------------------------------------------
       31802        0        0    0.0    7367   8371  46%



    [Scrub-age graph trimmed: every bar sits at 0 days since the last scrub/sync.]


    The oldest block was scrubbed 0 days ago, the median 0, the newest 0.


    No sync is in progress.

    The full array was scrubbed at least one time.

    No file has a zero sub-second timestamp.

    No rehash is in progress or needed.

    No error detected.


    Ok, so far it seems good.


    I'll try a sync and see what happens, because I have backups and live dangerously.

    snapraid sync

    Self test...

    Loading state from /srv/dev-disk-by-uuid-36b30f8f-2e71-43c6-9cff-48a552f64331/snapraid.content...

    Scanning disk d01...

    Scanning disk d02...

    Scanning disk d03...

    Scanning disk d04...

    Scanning disk d05...

    Scanning disk d06...

    Scanning disk d07...

    Scanning disk d08...

    Using 488 MiB of memory for the file-system.

    Initializing...

    Resizing...

    Saving state to /srv/dev-disk-by-uuid-36b30f8f-2e71-43c6-9cff-48a552f64331/snapraid.content...

    Saving state to /srv/dev-disk-by-uuid-96c640aa-25ae-42b6-a5b2-8f9daa836f57/snapraid.content...

    Saving state to /srv/dev-disk-by-uuid-83e4b1f3-3b21-49a7-a5fa-cb0a56d45259/snapraid.content...

    Saving state to /srv/dev-disk-by-uuid-053f9e7d-7f87-4b3c-a52c-241dbca4f55c/snapraid.content...

    Saving state to /srv/dev-disk-by-uuid-49da7c37-6529-43f9-ab2d-2663f9102127/snapraid.content...

    Saving state to /srv/dev-disk-by-uuid-320111a8-6ba3-4d6a-a0ff-ef8972925e19/snapraid.content...

    Saving state to /srv/dev-disk-by-uuid-c39c1a77-dfe2-438d-867b-f9f36f9220c4/snapraid.content...

    Saving state to /srv/dev-disk-by-uuid-b3b2351d-cf6e-48a0-ba52-7b832ea24e37/snapraid.content...

    Verifying /srv/dev-disk-by-uuid-36b30f8f-2e71-43c6-9cff-48a552f64331/snapraid.content...

    Verifying /srv/dev-disk-by-uuid-96c640aa-25ae-42b6-a5b2-8f9daa836f57/snapraid.content...

    Verifying /srv/dev-disk-by-uuid-83e4b1f3-3b21-49a7-a5fa-cb0a56d45259/snapraid.content...

    Verifying /srv/dev-disk-by-uuid-053f9e7d-7f87-4b3c-a52c-241dbca4f55c/snapraid.content...

    Verifying /srv/dev-disk-by-uuid-49da7c37-6529-43f9-ab2d-2663f9102127/snapraid.content...

    Verifying /srv/dev-disk-by-uuid-320111a8-6ba3-4d6a-a0ff-ef8972925e19/snapraid.content...

    Verifying /srv/dev-disk-by-uuid-c39c1a77-dfe2-438d-867b-f9f36f9220c4/snapraid.content...

    Verifying /srv/dev-disk-by-uuid-b3b2351d-cf6e-48a0-ba52-7b832ea24e37/snapraid.content...

    Syncing...

    Using 96 MiB of memory for 32 cached blocks.

    Nothing to do


    Here's where my sanity check comes in.

    Everything appears to be functioning as it should.


    Is there something else I can do to see that all went well?
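    One extra check that comes to mind (my own suggestion, not an official SnapRAID procedure): SnapRAID saves a copy of the content file to every configured "content" location, and to the best of my understanding those copies should be identical, so comparing their checksums is a cheap consistency test. A sketch, using temp directories as stand-ins for the real /srv/dev-disk-by-uuid-* mount points:

```shell
# Sanity check: every copy of snapraid.content should hash identically.
# In real use the glob would be something like:
#   sha256sum /srv/dev-disk-by-uuid-*/snapraid.content
# The temp directories below are stand-ins for the real mount points.
base=$(mktemp -d)
for d in d01 d02 d03; do
    mkdir -p "$base/$d"
    printf 'identical content state\n' > "$base/$d/snapraid.content"
done

# Count distinct hashes across all copies; anything other than 1 is a red flag.
distinct=$(sha256sum "$base"/*/snapraid.content | awk '{print $1}' | sort -u | wc -l)
[ "$distinct" -eq 1 ] && echo "all content copies identical"
rm -rf "$base"
```

    Beyond that, a snapraid diff (which should report no differences) and a partial scrub would exercise the parity itself rather than just the metadata.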


    Any feedback is greatly appreciated.
