Posts by snoopy913

    I basically got exactly what I mentioned above: the Adaptec 7 Series card worked out of the box and has an HBA mode, so I just use it as an HBA.

    I also switched to SnapRAID and MergerFS, as my array is media only.
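    For anyone interested, a minimal sketch of what such a SnapRAID + MergerFS setup can look like; the mount points, disk names and options here are examples, not my exact config:

    Code
    # /etc/snapraid.conf (sketch)
    parity /mnt/parity1/snapraid.parity
    content /var/snapraid.content
    content /mnt/disk1/.snapraid.content
    data d1 /mnt/disk1/
    data d2 /mnt/disk2/

    # /etc/fstab: pool the data disks into one mount with mergerfs
    /mnt/disk* /mnt/pool fuse.mergerfs cache.files=off,category.create=mfs,dropcacheonclose=true 0 0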


    I would also love the idea of a HW list; so far I just searched for Debian compatibility.


    As a case I got the Node 804, which is now full.

    Later I will switch to M.2 SSDs and larger 2.5" drives alongside the media array.

    I hope to stay with 8 drives at least for the next 3-4 years.

    The PSU should be fine, as the Toshiba HDDs also use the 12V line.

    One issue is that I want to use the HW transcoding feature of Plex, which works best with Intel Quick Sync, and that means I need an Intel Core CPU... I didn't find any fitting server-grade motherboard, so ECC support is hardly available; that's why I would stay with consumer tech.


    I had a look at the HBAs, but didn't find any good deal other than the Adaptec 71605, which is available second-hand for 80€ in Germany.

    I know it is not generally recommended, but it is reported to have Debian support, and I have not yet found any hints of issues.

    I will of course update if it is working.

    Thanks, that goes in the same direction as I was already thinking.


    I am currently thinking about this setup:

    • Motherboard: ASRock B660M Phantom Gaming 4
    • Or: MSI MAG B660M MORTAR, which would have DDR5, 6x SATA and 2.5GBase-T
    • CPU: Intel Core i5-12400
    • RAM: 2x16GB
    • Expansion Card: Adaptec RAID 71605


    I would use the card without RAID and stay with the SW RAID if there is no significant benefit.


    Is any improvement to the HW possible?

    Hi all,


    I thought I had a simple problem, that I just needed more SATA connectors, but now I think I will need to build a new NAS...


    Current Setup:

    • Case: Node 804
      • basically enough HDD space and cooling currently.
    • CPU / Motherboard: ASRock J5040-ITX
      • Great CPU with enough power, mostly utilised at 20-30%
      • The motherboard is more of an issue, as it has too few SATA ports and only PCIe 2.0 x1
    • RAM 16GB
      • is mostly utilized at 30-50%, so it's fine for now
    • PSU: 400W be quiet System Power 9 CM
      • until now it has managed all drives, but I really don't know how heavily it is utilised
    • Expansion Card: InLine 76617F 8 Port PCIe 2.0 x1
      • Currently with 4 HDDs, and it takes a lot of time to "initialise" the RAID on startup or when the HDDs are woken up from standby
      • When in normal use, the decreased throughput is currently not an issue
      • The card has 2 Marvell 88SE9215 controllers, but only 1 (4 SATA ports) is recognised when checking lspci
    • 2,5" SSD 256GB with OMV and some config paths for the docker container
      • far from too little space
    • HDDs: 6x 14TB Toshiba MG07/MG08 SATA in RAID5 (which is getting too small) and a 4TB WD with documents
      • upgrade will come with 2 additional 14TB drives
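    By the way, to check whether both Marvell controllers on such a card are visible at all, something like this can help (the grep patterns are just my guess at the relevant strings):

    Code
    # each 88SE9215 should show up as its own SATA controller entry
    lspci -nn | grep -i sata
    # kernel messages can show why a controller or port did not come up
    dmesg | grep -iE 'ahci|marvell'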


    Currently running Services:

    • OMV with SMB shares
    • Docker
      • Plex Media Server with mainly 4K content, currently ~50TB
      • vaultwarden, syncthing, nextcloud, homeassistant (but not used very much), heimdall, glances
      • 4-5 other selfhosting solutions, but not very CPU/RAM intensive
      • If I upgrade, I would plan with a lot of buffer

    My possibilities to get more storage:

    • Change to larger HDDs: with 18TB drives I would get <24TB more space, but the bottleneck of the PCIe card remains, and my current HDDs are still fine
    • SATA Port Multiplier: one of the worst solutions I could use
    • USB SATA hub: maybe not as bad as the port multiplier; no idea how this would work with the RAID
    • New Mainboard, CPU, RAM
      • As I use the Plex Server, I would go for the Intel 12th / 13th Gen with iGPU, they would have AV1 transcoding support.
      • And the Mainboard should have as many SATA Ports as possible
      • Mainboard requirements: Socket 1700, mATX, ideally 8 SATA ports, 2+ PCIe slots
      • I found this one: Biostar B660MXC PRO, but it is nowhere to buy
      • Or I go for Supermicro, but then I would have to go for other CPUs, I think
    • I could also separate the storage and self-hosting stuff
      • there are supermicro boards with Atom CPUs and many SATAs
      • This would maybe double my energy consumption, and I would need everything twice


    Does anyone have other ideas how I could increase my SATA ports? Or a great upgrade recommendation? Any idea would be very helpful.

    Thanks in advance.

    Great, it worked. I did it without the spare disk, as I want to add it to the RAID later, together with two others, so I only have to resize once.

    I cannot select it in the GUI to mount it; is there a recommended way to do it?
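    In the meantime I could check and mount it from the shell; the mount point below is just an example, and as far as I know OMV normally wants filesystems mounted through its own GUI/database, so this would only be to verify the data:

    Code
    blkid /dev/md0             # confirm the filesystem type and UUID
    mkdir -p /mnt/md0
    mount /dev/md0 /mnt/md0    # temporary mount to check the data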

    This is the output:

    Code
    root@omv-j5040:/home/Simon# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : inactive sde[1](S) sdg[3](S) sdf[5](S) sdb[2](S) sdh[4](S) sdd[0](S)
          82033508352 blocks super 1.2
    
    unused devices: <none>

    Currently, a sixth disk is plugged in, which I tried to add to the RAID; it failed with an error message, and now it is added as a spare:

    This is the output when I stop and assemble it again:

    I suspect I have to force it, or can I see why these disks are busy?
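    What I would roughly try, based on what I have read (the device names are taken from the mdstat output above, so treat them as assumptions; --force only as a last resort):

    Code
    mdadm --stop /dev/md0
    # a verbose assemble prints a per-disk reason if members are rejected or busy
    mdadm --assemble --verbose /dev/md0 /dev/sd[bdefgh]
    # only if the verbose run shows stale but otherwise consistent members:
    # mdadm --assemble --force /dev/md0 /dev/sd[bdefgh]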

    Hi all,


    I have been using a RAID5 with 5 disks without issues for over a year now, but today I moved everything to a new case to get more space and to grow the RAID to maybe 8 disks later.

    After I moved everything, I had several issues with disks not being recognised, but this is an issue with the SATA expansion card with 2 Marvell 88SE9215 controllers, of which only 1 is usable.


    Now all my disks from the RAID are recognised and in good condition in the GUI, but the SW RAID is not shown, and then obviously the file system on the RAID is marked as missing.


    Then I checked mdadm in the terminal; the output shows that it is defined as a raid0 instead of raid5, but the raid devices are the correct ones.

    When using mdadm --examine on each disk, I see two different pictures: two disks have a newer update time with 3 disks marked missing, and 3 disks have an older update time, from when all disks were still active.
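    To compare the superblocks side by side, the update times and event counters can be pulled out in one go (the device names are placeholders for my five members):

    Code
    mdadm --examine /dev/sd[b-f] | grep -E '^/dev/|Raid Level|Update Time|Events'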


    Older superblock of a disk

    Newer superblock of a disk

    mdadm.conf:


    I tried mdadm --assemble /dev/md0; it takes some time without output, and the details are still the same.


    I very much believe there is an easy fix for this, as it looks like the information on the disks is just not consistent. But I am careful, as I don't want anything to be lost (I don't have a backup because it is nearly 50TB; the files are not "unique", but it would take a lot of time to get them back).


    Does anyone know what I could try to fix the RAID5?


    Thank you very much in advance.