Posts by bunducafe

    I don't believe it is a difference between OMV 6 and 7. I think the difference is between the docker-ce package for Debian 11 and 12.


    I just copied your airsonic and navidrome yml to my omv7 dev system (amd64) and the web interface works fine on both. I'm not sure why armhf has an issue. The plugin is installing docker the same way.

    Thanks for giving it a try. Then it might really be an issue with Debian 12 and the armhf environment.


    Meanwhile I rolled back to OMV6 on Debian 11 and everything works as expected. I will probably just leave it as it is for now. The HC1 is just perfect as a little media server and even my wife does not complain because it's tiny and noiseless ;)

    You can't get into the OMV web interface or the container web interface?


    I will mention that your HC1 is armhf/32-bit and you will likely have a harder and harder time getting docker things in general working on it.

    Indeed it is the container web interface.


    And yes, it seems that this is going to be tricky. Maybe I will roll back to OMV6 from the backup in order to keep things going the way I want.

    gderf : Well, maybe that was misleading. The yaml files have the correct format on my system, otherwise I would not have working containers; I just did a quick copy and paste. Evidently they lost their formatting in the process, which is why they're a bit messed up here. Mea maxima culpa.


    I was just wondering why the same procedure with the identical yaml files (with correct formatting) that I used on OMV 6 now leaves the web GUI inaccessible on the OMV 7 instance. When I have access to the server again I will look at the logs. So I am out of here now, and let's cross fingers that somebody has a clue for the original OP.

    Are these three separate yml files?

    Yep, I just wanted to keep things simple, and of course they all have the right formatting, otherwise I wouldn't have 3 working containers.
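    Just to illustrate what I mean by correct formatting, a minimal Navidrome compose file would look roughly like this (image tag, paths and IDs here are only an example, not my actual file):

        # Minimal Navidrome compose sketch - paths and IDs are placeholders
        services:
          navidrome:
            image: deluan/navidrome:latest
            container_name: navidrome
            user: "1000:1000"                                  # your PUID:PGID
            ports:
              - "4533:4533"                                    # web UI port
            volumes:
              - /srv/dev-disk-by-label-data/navidrome:/data    # app data
              - /srv/dev-disk-by-label-data/music:/music:ro    # music library
            restart: unless-stopped

    The indentation is the part that breaks when pasting into the forum; the actual files on the server validate fine.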

    Obfuscating things like PUID, PGID, disk mount point directories, etc. is completely pointless, can mask actual errors, and makes it difficult to help you.

    Hm, I'll keep that in mind... but as stated above, I had it working flawlessly on OMV 6, including a working GUI accessible via URL.

    It would be more appropriate to start a new thread instead of hijacking someone else's.

    Good point, but as it is pretty much the same issue I didn't really think I was hijacking it.


    It would be better to post the log files along with the path they came from than to assume they don't have anything useful in them and not post them.

    Correct. I will have a look into that and would then probably open a new thread with the logs.

    Here we go. It's actually just these ones:


    On the Asus board with an N100 you can add six SATA ports with an adapter in the mini PCIe slot, like this one: https://es.aliexpress.com/i/1005005068881876.html


    and a network adapter in the PCIe slot, like this one: https://www.amazon.es/Adaptado…Controlador/dp/B0BNHWZBCC

    or a dual-port one like this: https://www.amazon.es/Adaptado…controlador/dp/B0CBX9MNXX

    Yep, that would be the only possibility for my use case. I just believe that this little board might then consume much more power. At the moment I am sorting things out... and maybe something entirely different comes my way - I am even considering an ATX board that can idle at low power, even if the build itself would be significantly bigger than my old machine.

    This is very similar, but with a more current CPU. The difference is that you will have to mount the heatsink yourself. https://androidpc.es/placa-base-pasiva-intel-n100-i3-n305/

    This one does indeed seem promising, but it is from an unknown manufacturer and the only review says it has a weird BIOS and that it's difficult to get drivers.

    As I might keep the Helios64 as a backup server, I was wondering if anybody could give me a tip on a motherboard that has 2.5GbE LAN. The N100 ones are ruled out here because the only possibility to get more SATA ports is via PCIe (correct me if I am wrong).

    For you, I suggest you get the smallest 250 watt PSU that has the typical ATX power cables. *IF* you're really only going to run 3 HDDs + 2 SSDs then you could drop to 150 watt (with an N100 board at least).

    Yes, the 3 HDDs and 2 SSDs would be the max. I am coming from 4 x 4TB with one parity, so I can easily achieve decent storage space with 3 x 12TB or even 16TB HDDs.


    PSU-wise I am fine either way. I just wanted to know if there is a particular advantage in using the ATX ones or not. If I want to go a little bit more future-proof then the ATX variant seems to be better - which leaves me with the ASUS board at this very moment ;)

    Hi folks,


    as I am about to rethink my setup here as well, I am kind of hijacking this thread because it's about more or less the same use case: running a handful of media dockers and using the machine as data storage (archival).

    I currently stick with a Helios64 that runs quite smoothly. As this thing is not maintained anymore and I would have to buy bigger hard drives, I was thinking of changing the setup completely. The initial idea of getting an HPE Microserver seemed a bit boring in the long run, so I would rather get the pieces myself and mount them in a decent case. I love the Kobol case, but it would need too many mods that I am not willing to do at this very moment.

    After the nice read of chente 's comparison on why he got a new N100 mobo, my thinking was to build something similar.

    Two things:
    - I intend to have 3x data disks and 1 SSD (currently I run 4 x 4TB disks with SnapRAID / MergerFS plus 1 SSD for the music collection; see the sketch below) -> 4 bays

    - In terms of power: Is there an advantage to having an ATX PSU vs. a brick one that resides outside the case?

    Of course I would love to have a power-efficient NAS, but I don't want to run into scenarios where power peaks bring the thing down. The new NAS should run with 3 spinning drives and 1 SSD - maybe a cache M.2 SSD... but that does not depend on the mobo I am about to get.
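    For context, the current pooling setup is roughly along these lines - in OMV the SnapRAID and mergerfs plugins generate all of this for you, so the labels and paths below are only placeholders, not my real ones:

        # snapraid.conf sketch: one parity disk, three data disks (placeholder labels)
        parity  /srv/dev-disk-by-label-parity/snapraid.parity
        content /srv/dev-disk-by-label-data1/snapraid.content
        content /srv/dev-disk-by-label-data2/snapraid.content
        data d1 /srv/dev-disk-by-label-data1/
        data d2 /srv/dev-disk-by-label-data2/
        data d3 /srv/dev-disk-by-label-data3/

        # mergerfs pool of the data disks (fstab-style line, placeholder paths)
        /srv/dev-disk-by-label-data1:/srv/dev-disk-by-label-data2:/srv/dev-disk-by-label-data3  /srv/pool  fuse.mergerfs  defaults,allow_other,category.create=mfs  0  0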

    Any thoughts on this?

    Krisbee Thanks for the info. I already had a quick look at the links... now I am even more tempted to get the thing... indeed the Microserver is kind of a bargain, which doesn't really help with the decision making :)


    So if, in the far future, I wanted to play around with VMs, would it then be feasible to upgrade the CPU? Or is that kind of tricky / impossible here?

    Sorry for reviving this thread, but...


    I could get a ProLiant MicroServer Gen10 Plus v2 with a Pentium Gold. I am somewhat tempted to replace the meanwhile unsupported Helios64, even though it runs smoothly for me. The question would be: Are 2 cores enough for a NAS? And: Would the HP Microserver outperform the ARM64 machine that I have? Core-wise the Helios64 has 6, but I think it also depends on the use case.


    Actually I store all my data on encrypted drives on the NAS. It acts as a media server, mainly for music on an SSD, but once in a while it also serves movie streaming from other 3.5" spinning drives. I use some media dockers; sometimes Jellyfin is laggy, and PhotoPrism never worked smoothly with the Helios64.


    So, would the HP significantly push me forward within that use case? Or would it be wiser to keep the Helios64 with its moderate to low power consumption? The RAM is also something to take into account: the HP has 16GB and the Helios64 just 4GB.

    votdev Not a nightmare but sometimes things do not work due to old adjustments.


    Francobritannique Did you ever touch the /etc/nsmb.conf file on your Mac in the past? If so, maybe copy its content, save it somewhere as a text file for later, and then delete it. Reboot. Reconnect the samba shares.


    macOS strictly requires the SMB 3 protocol, and if you changed that file in the past it might be the culprit.
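    If you want to check first, something like this in Terminal shows whether such a file exists and keeps a copy before removing it (standard macOS commands, nothing OMV-specific):

        # show the custom SMB client config, if there is one
        cat /etc/nsmb.conf

        # keep a backup copy, then remove it so macOS falls back to its defaults
        sudo cp /etc/nsmb.conf ~/nsmb.conf.bak
        sudo rm /etc/nsmb.conf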

    I don’t have any issues with Samba and macOS 14.1. Could you describe how you connect to your server?


    Hitting CMD + K, entering smb://ipofserver and then putting in your credentials?


    Do you see the samba share in the finder at all?


    As other machines find the share and are able to access it without issues, I do believe it is set up properly, but we have to find out what you might have changed for your MacBook's access to the share.

    Thank you for this one... Just two things:
    So the backup process has changed, right? Instead of the files and folders, I now find two tar.gz files that are used to regenerate the machine when it is set up again.


    And: I had set up a scheduled task for copying the files "the old way". In the new GUI I did not find any possibility to set up a new scheduled task with the changed parameters. No big deal for me, as I will then just use omv-regen once in a while and do the backup manually, but I just wanted to be sure :)
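    In case it helps anybody, a simple scheduled task along these lines should still work for copying the new archives off the machine (the source path is just a placeholder for wherever omv-regen drops its tar.gz files):

        # copy the omv-regen backup archives to another drive (placeholder paths)
        rsync -a --delete /path/to/omv-regen-backups/ /srv/dev-disk-by-label-backup/omv-regen/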


    Thanks for the great work.

    See the post above and modify the config file to your needs. You can get the paths of your shared folders from the OMV GUI.

    I just wanted to shout out a big THANK YOU to all of you folks who are steadily involved with OMV and its development, and for the tireless support you give each of us here in the forum. It's just incredible...

    So, even though I did not want to name just a few, because I will certainly forget some very important team members, I cannot resist (and sorry to anyone I forget, but maybe I'll get some help here, too):


    ryecoaaron  votdev  Soma  chente  macom  gderf  KM0201  Spy Alelo  crashtest  donh  subzero79  WastlJ  Agricola


    The compose plugin rocks; I could finally get rid of Portainer, and the docker management - at least in my case - is way easier and more stable.
    And for those users who are unaware of the donation page, here is the link again: https://www.openmediavault.org/donate.html


    Thanks again for all your efforts.