Posts by Linux13524

    Hi @jollyrogr. Thanks for your answer :)

    It's not only the spinning of the disks; there is also constant (audible) access to them, which prevents them from spinning down.
    SSDs are too expensive for me (even though they have gotten a lot cheaper), and relocating the NAS is difficult in my small apartment (the living room is still better than the bedroom), so earplugs would be the best remedy against the noise...

    But I'm still worried about the constant access to the disks:
    iostat shows recurring write access on the HDDs (it's a RAID-Z with sdc, sdd and sde):

    Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
    sda               0,00         0,00         0,00          0          0
    sdb               0,00         0,00         0,00          0          0
    sdc              21,50         0,00       270,00          0        540
    sdd              23,00         0,00       258,00          0        516
    sde              20,00         0,00       272,00          0        544
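    As a quick sanity check, the busy members can be picked out of that output with a short filter. This is only a sketch run against the pasted sample above (the `tr` call normalizes the comma decimal separator of a non-English locale); on the live system you would pipe the output of `iostat` itself instead of the here-string:

```shell
# Print the device names whose kB_wrtn/s column (field 4) is nonzero.
sample='Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda               0,00         0,00         0,00          0          0
sdb               0,00         0,00         0,00          0          0
sdc              21,50         0,00       270,00          0        540
sdd              23,00         0,00       258,00          0        516
sde              20,00         0,00       272,00          0        544'

printf '%s\n' "$sample" | tr ',' '.' \
  | awk 'NR > 1 && $4 > 0 { print $1 }'
```

    To map the writes to a process, `iotop -o` (accumulated with `-a`) or `fatrace` on the mounted filesystem are the usual next steps.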

    Hi. I have problems keeping my HDDs in standby mode. Since my NAS sits in my living room, it is very annoying to hear the HDDs spinning and working.
    The good thing: I know which services keep the HDDs online. The bad thing: I don't know how to fix them.
    The two services I identified as problematic are GitLab (Docker) and Nextcloud (no Docker). I moved all GitLab Docker volumes except the data volume (/var/opt/gitlab) to my system drive, which is an SSD, and for Nextcloud only the data folder is on the HDDs, too. For Docker, moving ALL volumes to the SSD worked (I did that for Pi-hole), but I want the GitLab data to remain on the HDDs. For Nextcloud the problematic process seems to be php-fpm, which shows up in iotop.
    Does anyone have an idea how to fix this, or has anyone had a similar problem? If you need more information about my setup, let me know...
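    The split described above (GitLab data on the HDDs, config and logs on the SSD) can be expressed with bind mounts in the compose file. A minimal sketch; the host paths /ssd and /tank are made-up placeholders, not paths from this setup:

```yaml
# docker-compose.yml fragment (sketch): keep the chatty volumes on the SSD
# so only bulk repository data can touch the HDD pool.
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    volumes:
      - /ssd/gitlab/config:/etc/gitlab      # SSD: small, frequent writes
      - /ssd/gitlab/logs:/var/log/gitlab    # SSD: continuous log writes
      - /tank/gitlab/data:/var/opt/gitlab   # HDD pool: bulk data
```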

    EDIT: I also see the strange behaviour that the HDDs spin down immediately after I stop the problematic processes. I set APM to 127 and the spindown timeout to 30 min. For debugging the problem this is nice, but I think it is not very healthy for the drives...
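    For reference, the APM and spindown values mentioned above can be persisted across reboots via /etc/hdparm.conf on Debian-based systems such as OMV. A sketch with the same settings; the by-id path is a placeholder for one of the actual drives (spindown_time = 241 encodes 30 minutes, since values above 240 count in units of 30 minutes):

```
# /etc/hdparm.conf -- applied at boot
/dev/disk/by-id/ata-EXAMPLE-DISK {
    apm = 127
    spindown_time = 241
}
```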

    Ah ok, I see. And would it be possible for OMV to change the settings of resolved on installation? Or to add an option to the web interface... Because with a blocked DNS port there is no way to run any DNS server, and I think this will be an issue for more people than just me.

    Hi. I moved from OMV 4 to OMV 5 recently, and when I try to start the docker-compose file that worked for Pi-hole on OMV 4, I now get an error about TCP port 53, which is used for DNS. The port is already in use by systemd-resolved. When I stop the service I can start the Pi-hole container, but whenever I restart the system, Pi-hole cannot start. I tried disabling the systemd-resolved service, but then I don't get DNS resolution on the system anymore. So what changed with OMV 5, and what can I do to free port 53?
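    For anyone hitting the same conflict: the listener that systemd-resolved binds to port 53 is its local DNS stub, and it can be switched off in the service's own config while the service keeps running. A sketch of the relevant fragment (the path is the Debian default):

```
# /etc/systemd/resolved.conf
[Resolve]
DNSStubListener=no
```

    After `systemctl restart systemd-resolved`, check /etc/resolv.conf: if it still points at the stub address 127.0.0.53, it has to be switched to a reachable resolver (e.g. by linking it to /run/systemd/resolve/resolv.conf), or local lookups will fail.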

    @macom No never

    fdisk lists the disk as follows:

    Device     Boot    Start      End  Sectors  Size Id Type
    /dev/sda1  *        2048 46465023 46462976 22.2G 83 Linux
    /dev/sda2       46467070 62531583 16064514  7.7G  5 Extended
    /dev/sda5       46467072 62531583 16064512  7.7G 82 Linux swap / Solaris

    Hi. I recently installed OMV 5 on my system and now I'm trying to shrink my system partition to create a second partition. In the web interface the system partition is labeled as ext4, but when I boot GParted Live it is shown as zfs and I cannot resize it. What is the actual partition format of OMV 5, and how can I resize it properly?
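    One possible cause for this symptom: if the disk carried ZFS before, stale ZFS labels may still sit on the partition, and blkid-based tools such as GParted then report zfs even though the current filesystem is ext4. A hedged sketch for checking this; the device name is taken from the fdisk output above, and --no-act only lists signatures without changing anything:

```
# List all filesystem signatures found on the partition (read-only).
wipefs --no-act /dev/sda1
# If a stale zfs_member signature appears alongside ext4, it can be
# erased selectively via its offset (back up first):
#   wipefs --offset <offset> /dev/sda1
```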

    I have a similar issue: "Error: "Could not resolve host: http:" - Code: 6"

    Clearing the browser cache does not help...

    Any ideas?