Building a new NAS (home use only): hardware and software help needed

  • Hello,


    I'm thinking about building a new NAS for home use only, and I need some suggestions and help on what I should do.


    My idea was to custom-build one.


    - I want around 40 TB of usable storage (should be enough; I'm thinking of 6x 8 TB HDDs)

    - I want to use a single-board computer (SBC) for space reasons

    - I plan to use OMV 5, or OMV 6 in the future

    - I want to use Portainer for Docker

    - The data mostly sits on the server and is very rarely moved; it's only my photography projects and movies.

    - The NAS should of course handle my data, plus some Docker containers (Nextcloud, Bitwarden, WordPress, Jellyfin, ...), and that's it. So I don't need to edit videos or anything directly on the machine.

    - I plan to use external HDDs as a backup


    - Remember, it is a home server and not some crazy high-end 800 MB/s transfer-speed machine. A normal speed of 30 to 100 MB/s is totally okay for my light use.


    __________________________________________________________________________________________

    Now to the questions.


    - What SBC should I use? Can someone recommend a good one with many SATA ports, or with an expansion card for all the hard drives?


    For me it is quite difficult to find something that I think could be suitable, so what do you recommend, or what do you have experience with?


    - What RAID level would you suggest? I thought of RAID 5.


    - What about Unraid and other NAS/RAID software? I plan to use OMV (because my RPi 4 system also uses it and it is comfortable), but I have read a lot of people saying that Unraid and so on is the best software, etc.


    - How about the upgradeability of the software? Will an upgrade damage the RAID or my system?


    Does someone have experience with this, or can recommend something? Also, can I handle everything in OMV, or is there any difference between OMV and other software?


    - What hard drives should I use? No SMR drives, I have learned, but does anyone have good experience with particular drives?


    __________________________________________________________________________________________


    That's it for now. Future questions will probably come up later.


    It would be very kind if people who have experience and ideas would share them.


    Thank you:)

  • Official post

    I'm thinking about building a new NAS for ...

    I'll give you my opinion on some of the questions.


    Regarding the motherboard, I wouldn't know what to advise; there are many options. If you are going to configure from scratch, I suggest you consider ECC memory, if your budget allows it. For your proposed use, you don't need a powerful processor.


    Regarding the RAID: first think about why you want RAID. RAID is not a backup. RAID only protects against disk failure and/or bitrot (and not always). It is absolutely necessary to have a BACKUP COPY. From there, if you still want a RAID, with eight disks you should set up at least two parity disks (in my opinion).


    Classic RAID systems will only protect you from disk failures. I would advise SnapRAID and mergerfs if your data does not change much, in this case with two parity disks. If you have some experience, ZFS will do well for you too; it could be RAIDZ2. Both systems have OMV support from the GUI, although with ZFS you may need some CLI; it depends on how far you want to go.


    Software upgrade capability: behind OMV is Debian, one of the most stable systems; there is nothing to fear. You have the option to install the Proxmox kernel if your hardware is supported, perhaps for more stability.


    What hard drives should I use? Choose CMR disks made for NAS use from well-known brands, for example Western Digital Red Plus or Seagate IronWolf. Avoid SMR disks.


    OMV is very versatile: you can use the bare minimum with the default settings and it will work. With a small learning curve you can configure a perfect server to suit you; it depends on your interest.


    Good luck.

  • Regarding the motherboard, I wouldn't know what to advise; there are many options. If you are going to configure from scratch, I suggest you consider ECC memory, if your budget allows it. For your proposed use, you don't need a powerful processor.

    Thank you for the info, I will consider that in my motherboard selection.



    Regarding the RAID: first think about why you want RAID. RAID is not a backup. RAID only protects against disk failure and/or bitrot (and not always). It is absolutely necessary to have a BACKUP COPY. From there, if you still want a RAID, with eight disks you should set up at least two parity disks (in my opinion).

    On the RAID topic: first of all, thank you for the information and help. The reason I want to use a RAID is that I want everything conveniently on one big "volume" on my computer and then create my shared subfolders. I don't like separate volume drives so much, but that is my personal preference. Thanks, I know RAID is not a backup; that's why I want to have the same amount of space in external backup drives. I could also just use my external drives, but a combined RAID system is simply more convenient to use in my home network.

    I thought of using six disks with 8 TB each, maybe. Everything is just a thought.

    Classic RAID systems will only protect you from disk failures. I would advise SnapRAID and mergerfs if your data does not change much, in this case with two parity disks. If you have some experience, ZFS will do well for you too; it could be RAIDZ2. Both systems have OMV support from the GUI, although with ZFS you may need some CLI; it depends on how far you want to go.

    On the RAID type: basically I only know the classic basic RAID levels (0, 1, 2, 3, 5, 6, 10) from my medical computer science studies. I don't know anything about SnapRAID, mergerfs, etc.

    What is the reason they are better? Do you have experience with them?


    The reason I like OMV is that for me, as a NAS beginner, it is very convenient and clean to use. I can make the few settings I want and that's it.

    Software upgrade capability: behind OMV is Debian, one of the most stable systems; there is nothing to fear. You have the option to install the Proxmox kernel if your hardware is supported, perhaps for more stability.

    What is a Proxmox kernel?

    OMV is very versatile: you can use the bare minimum with the default settings and it will work. With a small learning curve you can configure a perfect server to suit you; it depends on your interest.

    Yeah, that's why I like OMV. Like I said above, I really don't have an extreme use case, just home server use for me and my girlfriend.

    The only external services I currently run, and also want to run on the new NAS, are Nextcloud (to access my data on the go, or for any situation where I forgot my laptop, at work or anywhere else) and Bitwarden. And I use Syncthing for synchronizing my data from my PC and phone to my NAS.




    Thanks already for your help :)

  • Official post

    What is the reason they are better? Do you have experience with them?

    If the only reason is to have a single volume with all the disks, a RAID is not necessary. mergerfs merges the disks into a single volume without the need for RAID. In OMV 5 you can use the unionfilesystems plugin; in OMV 6 (there is a short time until it arrives) it will have to be the mergerfs plugin ... I think.

    If in addition to that you want protection against disk failure and data corruption, you can use SnapRAID at the same time. The combination of the two is a good solution, and it is easy to manage in OMV. It is intended for files that do not change frequently.

    This works fine; I have used it and it has worked well for me.

    ZFS does the same as the above but is a more "paranoid" system about data safety. For example, when you change a file that already exists, it first writes the new version to another part of the disk and only then removes the old one (copy-on-write). It has other virtues: built-in compression and more. BTRFS is similar to ZFS, but you don't have it built into OMV. These two are the most advanced storage systems out there.

    The reason these systems are better than a traditional RAID is that they protect against data corruption. SnapRAID, ZFS, and BTRFS all do. With a traditional RAID you don't have this protection.

    If you don't have much experience, my advice is to start with SnapRAID and mergerfs; it is very easy to configure and maintain.
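
    To give you an idea of how simple it is, a minimal hand-written snapraid.conf could look roughly like the sketch below. The mount points are made-up examples, not anything from your setup, and in OMV the SnapRAID and union filesystem plugins generate the equivalent for you from the GUI:

        # /etc/snapraid.conf - sketch for five data disks and two parity disks
        parity   /srv/parity1/snapraid.parity
        2-parity /srv/parity2/snapraid.2-parity

        # content files (keep copies on several different disks)
        content /var/snapraid.content
        content /srv/data1/snapraid.content
        content /srv/data2/snapraid.content

        # data disks
        data d1 /srv/data1/
        data d2 /srv/data2/
        data d3 /srv/data3/
        data d4 /srv/data4/
        data d5 /srv/data5/

        exclude *.unrecoverable
        exclude /lost+found/

    After copying new files you run "snapraid sync" to update the parity, and "snapraid scrub" from time to time to check for silent corruption. mergerfs then simply pools /srv/data1 ... /srv/data5 into one big mount point for your shares, while the parity disks stay outside the pool.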


    The Proxmox kernel is an Ubuntu-based kernel. It can be installed from OMV, replacing the one that comes by default. Unless you have a reason to do it, and if you don't have experience, I think you should not do it. First install OMV and play with it for a while.


    As for "nextcloud", it is fine if you are going to use its full potential. If all you want to do is synchronize with your PC and phones, I advise you "syncthing". Much simpler and it fulfills that function perfectly.

  • Wow, thank you very much for your explanation! SnapRAID with mergerfs, or ZFS, sounds really great! Which leads me to the question: if they are really so superior to normal RAID systems, then why do so many people and companies still use traditional RAID? The only thing that comes to my mind is that normal RAID might have higher performance or faster transfer speeds. Other than that, I see no reason anymore to use anything other than SnapRAID with mergerfs, ZFS, or BTRFS.


    I will take your advice, and I guess when I have finished planning my project I will use SnapRAID with mergerfs because it is so well protected. In my case I would say about 70% of my data is very rarely moved and mostly just stored. The other 30% I move all the time for astrophotography projects, with data sizes of 15 to 100 GB depending on how large the project is. That is basically it.

    The Proxmox kernel is an Ubuntu-based kernel. It can be installed from OMV, replacing the one that comes by default. Unless you have a reason to do it, and if you don't have experience, I think you should not do it. First install OMV and play with it for a while.

    Ah okay, so what is even the point of changing the kernel? Does it give me a benefit in some special use case, or what is the point of it? Wouldn't the normal standard kernel be enough?


    As for "nextcloud", it is fine if you are going to use its full potential. If all you want to do is synchronize with your PC and phones, I advise you "syncthing". Much simpler and it fulfills that function perfectly.

    I currently use Syncthing on my Raspberry Pi 4 experimental NAS. I also use Nextcloud to give my parents access as well. So I still like the additional features it gives me compared to just using Syncthing.




    I just read in your description below that you don't speak English and use a translator, but your English is very good. What languages do you speak? I am German and English is also not my mother tongue.


    Kind regards :)

  • Wow, thank you very much for your explanation! SnapRAID with mergerfs, or ZFS, sounds really great! Which leads me to the question: if they are really so superior to normal RAID systems, then why do so many people and companies still use traditional RAID? The only thing that comes to my mind is that normal RAID might have higher performance or faster transfer speeds. Other than that, I see no reason anymore to use anything other than SnapRAID with mergerfs, ZFS, or BTRFS.

    One reason to use RAID is to have high availability of the data. In many RAID types when a drive fails the data on the lost drive is still available within the array, although usually at significantly reduced performance. This means you can still access all the data while resilvering the lost drive.


    With SnapRAID, the lost drive can be recovered, but the lost data is not totally accessible until the recovery is totally complete. For large drives this can take a lot of time.

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

  • One reason to use RAID is to have high availability of the data. In many RAID types when a drive fails the data on the lost drive is still available within the array, although usually at significantly reduced performance. This means you can still access all the data while resilvering the lost drive.


    With SnapRAID, the lost drive can be recovered, but the lost data is not totally accessible until the recovery is totally complete. For large drives this can take a lot of time.

    Ohh okay, so with SnapRAID the data can be recovered but is not accessible during the recovery process, while with RAID it is accessible while recovering.

    I don't have a problem with not accessing the data during the recovery process. How long would it take if, for example, I lose an 8 TB drive, replace it, and SnapRAID recovers the data? Is it bearable, or does it take a crazy long time like a week?

  • I don't have any real idea how long it would take to recover an 8TB drive with SnapRAID. But up to 24 hours would not surprise me.
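
    For a rough sense of the scale (assumed numbers, nothing measured): a rebuild is basically limited by how fast the array can be read and the new disk written, so something like this back-of-the-envelope sketch applies:

        # Rough SnapRAID rebuild-time estimate for one lost data drive.
        # Assumed figures, not from this thread: 8 TB to restore and an
        # average sustained throughput of ~150 MB/s, with no other bottleneck.
        drive_bytes = 8 * 10**12          # 8 TB, decimal terabytes
        throughput_bps = 150 * 10**6      # 150 MB/s
        hours = drive_bytes / throughput_bps / 3600
        print(f"~{hours:.1f} hours")      # about 15 hours under these assumptions

    Slower disks, a busy CPU, or other services running at the same time would push that number up, which is why a full day is a reasonable expectation.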

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

  • One thing I have been asking myself:

    How do I determine how much raw HDD space I need to get 40 TB of usable storage out of a SnapRAID with mergerfs setup?


    For example, if I use 10 TB drives, then each drive has roughly 9.09 TiB of usable storage. Let's say I take 5 drives, so about 45 TiB of usable storage in total. Then how many parity disks do I need?

  • There is no connection between SnapRAID and mergerfs.


    Your data array of five 10TB drives would need one drive of at least 10TB for parity. Two would be better if you can afford it.

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

  • There is no connection between SnapRAID and mergerfs.


    Your data array of five 10TB drives would need one drive of at least 10TB for parity. Two would be better if you can afford it.

    Okay, thanks.


    If I build my own NAS and put 45 TB of data drives inside, then I have to be able to add another two drives as parity. Otherwise I should not plan a NAS this size.


    Is there some kind of rule for how much parity space I need for a given amount of data storage, drive size, and number of drives?

  • Official post

    Is there some kind of rule for how much parity space I need for a given amount of data storage, drive size, and number of drives?

    The only rule to follow with SnapRAID is: the parity drive must have a capacity equal to or greater than that of the largest data disk.

    Beyond that, it depends on the guarantees you are looking for. The more parity disks, the more disks can fail at the same time and still be recovered. A reasonable approach for normal home use might be to add one parity disk for every two or three data disks.
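
    As a quick worked example of the arithmetic, with made-up numbers close to what you mentioned (five 10 TB data disks plus two parity disks):

        # Usable space with SnapRAID + mergerfs: only the data disks count,
        # the parity disks add no capacity. Example figures, not a recommendation.
        data_disks = 5
        parity_disks = 2                          # roughly 1 parity per 2-3 data disks
        disk_tb = 10                              # decimal TB, as printed on the box
        usable_tb = data_disks * disk_tb          # 50 TB of data capacity
        usable_tib = usable_tb * 10**12 / 2**40   # ~45.5 TiB as the OS reports it
        print(f"{data_disks + parity_disks} drives -> ~{usable_tb} TB ({usable_tib:.1f} TiB) usable")

    That is also where your 9.09 figure comes from: a 10 TB drive is about 9.09 TiB, which is the number the operating system shows.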


    Here you have a lot of information about SnapRAID: https://www.snapraid.it/

    and here about unionfs: https://unionfs.filesystems.org/

    and here general information about OMV: https://openmediavault.readthedocs.io/en/5.x/


    Ask about whatever you don't understand and I will help where I can.

  • Ah okay, thank you,


    I will read it.

  • Because the Intel Community forum can't help me much, I need to ask here.


    Just an idea.

    Maybe I could use an Intel NUC as the base for my home-built NAS. Then I would have all the ports I need.

    If I, for example, "split" (share) the SATA connector into 5 SATA ports (I know about the slowdown because the bandwidth is shared) and the M.2 SSD slot into another couple of SATA ports (the shared speed should still be enough for my HDDs), and I also split the single HDD power output from the NUC across the then 7 HDDs in total, would the power supply be able to handle all the hard drives? Or is one single power connector from the NUC not enough for so many hard drives? Because I don't have another idea of how to power them, other than an extra power supply just for the HDDs.
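
    As a rough sanity check on the power side, here is the kind of estimate I can make with typical ballpark figures for 3.5in drives (assumed values, I have not measured anything):

        # Very rough power budget for 7 x 3.5in HDDs on one supply.
        # Assumed per-drive figures: spin-up peak of ~2 A at 12 V plus
        # ~0.7 A at 5 V, and about 6 W per drive once it is spinning.
        drives = 7
        spinup_w = 2.0 * 12 + 0.7 * 5     # ~27.5 W per drive at spin-up
        running_w = 6.0                   # per drive while running
        print(f"peak ~{drives * spinup_w:.0f} W, running ~{drives * running_w:.0f} W")

    A NUC's external power brick is typically only around 65 to 120 W for the whole system, so it looks like a separate power supply (or a powered drive enclosure) just for the HDDs would be the safer plan anyway.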


    Good idea or bad idea?


    The thing is, I want to avoid bulky systems with a big case like a tower, with a big motherboard and a lot more power consumption.

    Or does someone know of something small but with lots of SATA ports?

  • Official post

    Maybe I could use an Intel NUC as the base for ...

    I would say that to install 7 hard drives you need a medium (or large) case. What takes up the most space in your system is the disks. A mini-ITX board fits in such a case. You're not going to save a lot of space with a NUC.

    What languages do you speak?

    I am Spanish ;)

  • There are Mini-itx server boards that have as many as 12 SATA ports. I ran a 12 port ASRock C2550D4I in a tiny Silverstone DS380 case that holds 8 hot swap drives plus four internal 2.5in drives. I extended it with a homemade DAS in another identical case that holds another 8 drives for a total of 15. I ran this for more than five years and still have it.


    I recently moved to a used Chenbro NR12000 1U server that has six SATA ports and eight SATA/SAS ports. It holds twelve 3.5in drives and one 2.5in drive. Not small, potentially noisy, but very cheap.


    I have a thread on the forum about it:


    Monster 1U Server

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

  • I would say that to install 7 hard drives you need a medium (or large) case. What takes up the most space in your system is the disks. A mini-ITX board fits in such a case. You're not going to save a lot of space with a NUC.

    Yeah true.

    Sorry if I put it in a confusing way. I know most of the space is used by the disks, but I like smaller cases in terms of height; they can also be longer, but not too long.

    For example, the cases Synology uses are roughly the form factor I want.


    For example, cases like this, where the disks lie flat inside, stacked on top of each other:

    Eolize SVD-NC11-4 Mini ITX PC case for NAS systems (Amazon.de: Computers & Accessories)

    __________________________________________________________________________________

    Or where the disks are vertical, like this:

    Kobol - Helios64 Open Source NAS

    NAS enclosures at low prices | NAS enclosures at CYBERPORT

    __________________________________________________________________________________


    Those are very compact cases, and that's why I also want a small motherboard that can fit inside something like this. So I don't want a full ATX board, for example.

    That's what I meant.



    I am Spanish ;)

    Cool :)



    There are Mini-itx server boards that have as many as 12 SATA ports. I ran a 12 port ASRock C2550D4I in a tiny Silverstone DS380 case that holds 8 hot swap drives plus four internal 2.5in drives. I extended it with a homemade DAS in another identical case that holds another 8 drives for a total of 15. I ran this for more than five years and still have it.


    I recently moved to a used Chenbro NR12000 1U server that has six SATA ports and eight SATA/SAS ports. It holds twelve 3.5in drives and one 2.5in drive. Not small, potentially noisy, but very cheap.


    I have a thread on the forum about it:


    Monster 1U Server

    Thanks for the info. That is the kind of thing I am searching for: small form factor (not extremely powerful, because as seen in the first post above I just do home use stuff), lots of SATA ports, or a PCIe slot for adding more ports via a card, all fitting in a small case.


    I am currently searching for motherboards like this, but it is very difficult for me because I have only built one computer myself and haven't researched motherboards for NAS use much. So this is quite new to me.

  • Sorry to bother you guys, but I still need some hardware advice.


    I still really struggle to determine how much power the processor of my NAS should have. As Chente already said, I don't need a powerful processor for home data access and some Docker containers like a reverse proxy, Nextcloud, Syncthing, Bitwarden, WordPress, etc., but how much is enough, and what counts as "less powerful"?


    I am trying to find good mainboards to consider, but even then I need to know which socket I need for which CPU, whether a CPU soldered onto the board is enough, what is cheaper and draws less power, etc. And I don't know where to start or what I need.


    For example, here is what I looked at:


    ASRock Rack C3558D4I-4L -> lots of SATA ports and expansion possible -> is an Atom enough??


    ASRock J4005B-ITX (90-MXB6S0-A0UAYZ) -> also a decent number of SATA ports, but I would need to extend it with a PCIe card for my 7 disks -> Intel Celeron?


    Or do I need a socket 1156 board with a Core i3?


    Does anyone have some advice for me?






  • Official post

    ASRock Rack C3558D4I-4L -> lots of SATA ports and expansion possible -> is an Atom enough??

    ASRock J4005B-ITX (90-MXB6S0-A0UAYZ) -> also a decent number of SATA ports, but I would need to extend it with a PCIe card for my 7 disks -> Intel Celeron?

    Either of the two boards you propose should work for you. I don't see any apps on your list that need more power.

    Things to consider: the C3558D4I-4L board is designed for server use and the J4005B-ITX is designed for desktops. You are going to set up a server, so if you can afford the price difference, the first one will serve you better: ECC memory support especially, in addition to other features. You will only miss Intel Quick Sync if you are transcoding with Jellyfin; the J4005 has it, the C3558 doesn't.

    Keep in mind that they are both Atom-class processors, so performance may feel slow at some point; it depends on what you do with it. You will still be able to virtualize with KVM if you need to in the future, keeping the above in mind.

    What I have seen is that both are from 2017. Perhaps there is something more recent on the market with better performance and lower power consumption ...

    Or do I need a socket 1156 board with a Core i3?

    You don't need it. I use an old one because I already had it, so it's free for me. If I were buying now I would look at the same kind of boards you are looking at, or alternatively a second-hand Xeon server.
