Posts by bunducafe

    On the Asus board with an N100 you can install an adapter in the mini PCIe slot that provides 6 SATA ports, like this: https://es.aliexpress.com/i/1005005068881876.html


    and a network adapter on the pcie port like this: https://www.amazon.es/Adaptado…Controlador/dp/B0BNHWZBCC

    or dual like this: https://www.amazon.es/Adaptado…controlador/dp/B0CBX9MNXX

    Yep, that would be the only possibility for my use case. I just believe that this little board might then consume much more power. At the moment I am sorting things out... and maybe something entirely different comes my way - I am even considering an ATX board that can idle at low power, even if the build itself would be significantly bigger than my old machine.

    This is very similar, but with a more current CPU. The difference is that you will have to mount the heatsink yourself. https://androidpc.es/placa-base-pasiva-intel-n100-i3-n305/

    This one seems promising indeed, but it is from an unknown manufacturer, and the only review says the BIOS is odd and drivers are difficult to get.

    As I might keep the Helios64 as a backup server, I was wondering if anybody could give me a tip on a motherboard that has 2.5GbE LAN. The N100 ones are ruled out here because the only way to get more SATA ports is via PCIe (correct me if I am wrong).

    For you, I suggest you get the smallest 250 watt PSU that has the typical ATX power cables. *IF* you're really only going to run 3 HDDs + 2 SSDs then you could drop to 150 watt (with an N100 board at least).

    Yes, the 3 hdds and 2ssds would be the max. I am coming from 4 x 4TB with one parity so I can easily achieve decent storage space with 3 x 12TB or even 16TB hdds.


    Power-unit-wise I am fine either way. I just wanted to know if there is a particular advantage in using an ATX PSU or not. If I go a little more future-proof then the ATX variant seems better - which leaves me with the ASUS board at this very moment ;)

    Hi folks,


    as I am about to rethink my setup here as well, I am kind of hijacking this thread because it's about more or less the same use case: running a handful of media dockers and using the machine as data storage (archival).

    I currently run a Helios64 that works quite smoothly. As this thing is not maintained anymore and I would have to buy bigger hard drives, I was thinking of changing the setup completely. The initial idea of getting an HPE Microserver seemed a bit boring in the long run, so I would rather pick the parts myself and mount them in a decent case. I love the Kobol case but it would need too many mods that I am not willing to do at this very moment.

    After the nice read of chente 's comparison of why he got a new N100 mobo - well, my thinking was to build something similar.

    Two things:
    - I intend to have 3 data disks and 1 SSD (currently I have 4 x 4TB disks, SnapRAID / MergerFS + 1 SSD for the music collection) -> 4 bays

    - In terms of power: Is there an advantage to an ATX PSU vs. the brick one that resides outside the case?

    Of course I would love to have a power-efficient NAS, but I don't want to run into scenarios where power peaks bring the thing down. The new NAS should run with 3 spinning disks and 1 SSD - maybe an M.2 cache SSD, but that depends on the mobo I am about to get.

    Any thoughts on this?

    Krisbee Thanks for the info. I already had a quick look at the links... now I am even more tempted to get the thing... indeed the Microserver is kind of a bargain, which doesn't really help with decision making :)


    So if, in the far future, I wanted to play around with VMs, would it be feasible to upgrade the CPU? Or is that kind of tricky / impossible here?

    Sorry for reviving this thread, but...


    I could get a ProLiant Microserver Gen10 Plus v2 with a Pentium Gold. I am somehow tempted to replace the meanwhile unsupported Helios64, even if it runs smoothly in my hands. The question would be: Are 2 cores enough for a NAS? And: Would the HP Microserver outperform the ARM64 machine that I have? Core-wise the Helios64 has 6, but I suspect it also depends on the use case.


    Actually I store all my data on encrypted drives on the NAS. It acts as a media server, mainly for music on an SSD, but it also serves movie streaming once in a while from other 3.5" spinning drives. I use some media dockers; sometimes Jellyfin is laggy, and PhotoPrism never worked smoothly on the Helios64.


    So, would the HP significantly push me forward within that use case? Or would it be wiser to keep the Helios64 with its moderate to low power consumption? RAM should also be taken into account: the HP has 16GB, the Helios64 just 4GB.

    votdev Not a nightmare but sometimes things do not work due to old adjustments.


    Francobritannique Did you ever touch the /etc/nsmb.conf file on your Mac in the past? If so, maybe copy its content, save it somewhere as a text file for later and then delete the file. Reboot. Reconnect the Samba shares.


    macOS strictly requires the SMB 3 protocol, and if you changed that file in the past it might be the culprit.
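If it helps, a minimal sketch of that backup-and-delete step, assuming the file actually exists on your Mac (run it in Terminal, then reboot):

```shell
# Keep a copy of a customized /etc/nsmb.conf, then remove it so
# macOS falls back to its default SMB client settings.
if [ -f /etc/nsmb.conf ]; then
    sudo cp /etc/nsmb.conf "$HOME/nsmb.conf.bak"   # saved for later reference
    sudo rm /etc/nsmb.conf
fi
```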

    I don’t have any issues with Samba and macOS 14.1. Could you describe how you connect to your server?


    Hitting CMD + K, entering smb://ipofserver and then putting in your credentials?


    Do you see the samba share in the finder at all?


    As other machines find the share and can access it without issues, I believe it is set up properly, but we have to find out what you might have changed for your MacBook's access to the share.

    Thank you for this one... Just two things:
    So the backup process has changed, right? Instead of the files and folders, I now find two tar.gz files that are used to regenerate the machine on a fresh setup.


    And: I had set up a scheduled task for copying the files "the old way". In the new GUI I did not find any way to set up a new scheduled task with the changed parameters. No big deal for me, as I will just run omv-regen once in a while and do the backup manually, but I just wanted to be sure :)


    Thanks for the great work.

    See the above post and modify the config file to your needs. You can get the paths of your shared folders from the OMV GUI.

    I just wanted to shout out a big THANK YOU to all of you folks who are steadily involved with OMV and its development, and for the tireless support you give each of us here in the forum. It's just incredible...

    So, even though I did not want to name just a few, because I will certainly forget very important team members, I cannot resist (sorry for forgetting some, but maybe I'll get some help here, too):


    ryecoaaron  votdev  Soma  chente  macom  gderf  KM0201  Spy Alelo  crashtest  donh  subzero79  WastlJ  Agricola


    The compose plugin rocks. I could finally get rid of Portainer, and the docker management - at least in my case - is way easier and more stable.
    And for those users who are unaware of the donation page, here is the link again: https://www.openmediavault.org/donate.html


    Thanks again for all your efforts.

    I'd say: it depends. I have a low-power NAS in the office, and when I haven't used it for more than 6 hours the hard drives spin down. I also have an SSD in the NAS as the principal drive, so it happens that the disks sometimes don't spin up for two weeks.


    Some of my disks are older than five years and work flawlessly. At least according to the disk specs, wear should not be an issue, but I know that a lot of people let their disks keep spinning even when not accessed.

    docker stop container_name
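With a compose-based stack there is a per-service equivalent; a sketch, where the compose file path and the service name `jellyfin` are just placeholders for illustration:

```shell
# Stop a single service from a multi-service compose stack,
# leaving the other containers in the stack running.
docker compose -f /srv/compose/media.yml stop jellyfin

# Bring it back later without redeploying the whole stack.
docker compose -f /srv/compose/media.yml start jellyfin
```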

    Jep, that's what I did in the meantime... ;)

    Is there a reason you need them all in one stack/compose file? Having them separate also makes them easier to update, as you don't have to redeploy the entire compose file just to update one container.

    ... but as it is sometimes just nice to use the GUI I will split the stacks now and have everything within one tab.

    I was pretty heavily in stacks.. but they were super easy to move to the new plugin. I like it.

    I am kind of the same, especially since it is easier to map the ports when adding services that depend on a different service.


    So far everything is running, just one thing: if I want to stop a container I have to go to Files and press the down arrow. As I have put various containers into one single stack, I can only stop all of them. It would be handy indeed to be able to stop only a desired container instead of stopping all of them. Is there a way? If not, I would most probably split the compose files in order to be able to start and stop specific containers...


    Otherwise: Great job ryecoaaron. Works flawlessly.

    I just did the following steps and it fixed it:

    Code
    sudo apt install apparmor -y
    sudo service apparmor restart
    sudo service docker restart

    Just one question, maybe ryecoaaron can answer: I read a lot that installing the apparmor package is not recommended with an OMV install. Yet I see a lot of users installed it and are happy that their docker containers work again - and they don't complain about further issues (at least I could not find any). Is there any good reason to avoid installing apparmor if it does the job? Security-wise, for example? Is it more likely to break a docker install done via OMV?

    Encrypting the system disk actually helps because nothing starts until it is decrypted. And usually the system disk and data disks get decrypted at the same time, avoiding the issue. When you don't encrypt the system disk, it fully boots and won't wait forever for the data disks to be decrypted. When this happens, services try to start but their data filesystems are not there. There are many, many threads about this on the forum. This is by no means a rare problem.

    I second that. But I don't see that this is indeed "a problem". I repeat myself: if you need encryption, encrypt. Of course encryption comes at the cost of a little bit of effort every time your machine boots up - for me that is a non-issue and doesn't take much time at all:


    - (Re-)Boot the machine

    - decrypt the disks

    - restart mergerfs

    - restart docker


    Either done via the GUI or the CLI, it takes no more than 2 minutes.
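On the CLI those steps might look roughly like this; the device names, mount points and the mergerfs unit name are assumptions about my layout, not something universal:

```shell
# Unlock the LUKS-encrypted data disks (illustrative device names).
sudo cryptsetup open /dev/sdb1 data1
sudo cryptsetup open /dev/sdc1 data2

# Mount the decrypted filesystems.
sudo mount /dev/mapper/data1 /srv/data1
sudo mount /dev/mapper/data2 /srv/data2

# Restart the pool and the services that depend on it.
sudo systemctl restart mergerfs-pool.service   # hypothetical unit name
sudo systemctl restart docker
```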


    Overall I am happy with the performance of my system. Via WiFi I get around 45MB/s, and with LAN connected the read speed doubles. Writes only differ marginally, but that's absolutely fine for me.