Posts by delpiero3

    Already tried both; it doesn't change anything. I also tried different DNS servers, like 1.1.1.1, with no change. My router is rebooted once a month, it is OPNsense, and I apply updates.

    I tried it one more time today, and suddenly everything works, whether in DHCP or static configuration. I don't know what happened; I spent almost a day on this one.

    Thanks for asking me this; it made me try one more time and finally get the issue fixed.

    Cheers.

    Hi everyone,


    I hope you are all doing well.

    Recently, I noticed that I was no longer receiving update emails from my OMV installation.

    After connecting through SSH, I noticed that I couldn't ping www.google.com. I started checking the systemd-resolved and avahi services; both seem to be up and running:


    I tried to reset my network configuration using omv-firstaid and netplan, but nothing is working. When I run "cat /etc/resolv.conf", everything looks good as well:
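
    For anyone wanting to run the same checks, these are the generic ones I mean (just a sketch; resolvectl ships with systemd on Debian 10, and nslookup comes from the dnsutils package):

    Code
    # check the stub resolver service and the per-interface DNS servers
    systemctl status systemd-resolved
    resolvectl status
    # query a DNS server directly, to separate DNS failures from routing ones
    nslookup www.google.com 1.1.1.1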


    I am not sure what I am missing here.


    One last note: I noticed that SMB sharing through the hostname doesn't work either.

    I am using a bunch of Docker containers, and those seem to work without any issue; all containers running on that OMV instance can resolve domains.
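
    That difference would actually make sense: the containers use Docker's embedded DNS, while tools on the host (ping, SMB clients) resolve through NSS as configured in /etc/nsswitch.conf. A quick way to compare both paths (the .local hostname is only an example):

    Code
    # resolution exactly as host tools see it (goes through nsswitch.conf)
    getent hosts www.google.com
    # mDNS lookup via avahi, relevant for SMB-by-hostname (avahi-utils package)
    avahi-resolve -n mynas.local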

    Thanks for the help.

    Cheers.

    I have been using the setup I described above for 2 weeks now. I really struggled moving from software RAID in OMV to hardware RAID with Proxmox on top, but now my setup is working like a charm.

    I haven't noticed any performance drop so far, but I am currently only on Gigabit. I bought two 10GbE NICs to run some tests; we will see how it goes. I still have some spare servers on which I would like to compare native OMV vs. Proxmox with an OMV VM. But so far, I am pleased that I only have one machine running 24/7.

    Hi votdev, thanks for taking the time to look into that.


    Here is the result of the command, which I hope I ran correctly; it is the first time I have used the python3 interactive interpreter, and I am not very familiar with this stuff:



    Maybe I should report this to the smartmontools team as well?

    By the way, I noticed that the smartmontools version installed on my system is 6.6-1, and a newer one, 7.1-1, seems to be available in the Debian repository. I checked the changelog and didn't find anything related to my issue, but I wasn't able to update the package, even though the mirrors I am using definitely contain the new version. Isn't that weird?
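
    In case it helps, the kind of check I mean is something like this (standard apt commands, nothing OMV-specific):

    Code
    # show installed vs. candidate version as apt sees them
    apt policy smartmontools
    # refresh the package lists and try the upgrade explicitly
    apt update && apt install smartmontools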

    According to the dashboard, I am on 5.5.0-1, but unfortunately I have no more success in the SMART menu:



    Could it be because I have two hardware controllers in this machine, one internal managing a hardware RAID and another PCIe card configured in HBA mode with a software RAID?

    votdev, hello, I hope you are doing well. I have seen that you merged your improvement branch into the main branch some weeks ago, but I am wondering: is the fix already published through the OMV updates, and if not, when will it be? I am not familiar with how updates are generated after a merge. Currently I am up to date but still see the issue. Thanks for your feedback.

    Thank you for your time, ryecoaaron. I will try setting up Proxmox on a new system drive, so as not to touch my OMV install first, then try to mount my mdadm RAID 6 data drives and see how it goes. Regarding the Docker containers, I will keep them running in OMV; it doesn't make sense to move them to Proxmox if that means LXC and I have to reconfigure things.
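
    For reference, the reassembly step I have in mind is roughly this (non-destructive; the md device name and mount point are only examples):

    Code
    # scan device superblocks and mdadm.conf, assemble existing arrays
    mdadm --assemble --scan
    # verify the RAID 6 array came up cleanly
    cat /proc/mdstat
    # then mount it, e.g.
    mount /dev/md0 /mnt/data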

    I know it was a long post; I tried to be really specific about what I have and what I want to achieve, sorry.

    I will report how it goes; hopefully everything goes well :).

    Hello guys,


    I am coming back to you because I would like to get your advice on my plan to move from a bare-metal OMV 5 installation to a Proxmox environment.

    Currently, my setup looks like this:


    - HP DL380 G7

    - Dual Xeon L5640 @ 2.26 GHz

    - 24GB of RAM

    - Internal P410i RAID controller with 512MB cache

    - External P822 RAID controller in JBOD mode

    - 4x 300GB 15K SAS HDDs in RAID 10 for the system

    - 4x 600GB 10K SAS HDDs in RAID 5 for backups of my computers (system images only)

    - 12x 2TB HDDs in software RAID 6, housed in a StorageWorks MSA60

    - A second StorageWorks MSA60 with 12x 2TB HDDs, currently not configured


    I am running OMV 5 with something like 16 containers (plex, nextcloud, letsencrypt, urbackup ...), CIFS shares, and a UPS connected to it and monitored; no other fancy stuff.


    Aside from that setup, I am running an OPNsense firewall on a similar machine. And finally, for testing purposes, I also have a third similar one running Windows Server 2016.

    When I look at that, I am of course wondering about the electricity cost (the servers idle at 120-140W according to their respective iLO measurements) and about efficiency (running such a beast just for OPNsense doesn't make ... any sense).

    But rather than investing in new hardware, my plan is to switch my whole setup to Proxmox, for example running OMV and OPNsense on a single server so I can power one of the machines off. As I said, I also play with a Windows Server 2016 instance for testing, which could run in Proxmox as well.

    Now that you know a bit more about my usage, I would like to have your opinion about the storage and the overall setup.

    Regarding the storage: I have read that Proxmox doesn't officially support software RAID, only ZFS. Would you switch back to a hardware RAID, moving all data from one StorageWorks enclosure to the second and then expanding the volume with the first once the data is transferred, or would you take the same approach but build a ZFS pool instead?
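
    For the ZFS variant, what I understand it would look like is roughly this (pool name and device names are only examples; stable /dev/disk/by-id paths are recommended over /dev/sdX in practice). raidz2 keeps double parity, so it is roughly comparable to my current RAID 6:

    Code
    # create a double-parity pool across the 12 enclosure disks (example names)
    zpool create tank raidz2 sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm sdn
    zpool status tank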

    I am still undecided on the way to go. I am also looking at performance, of course, as I will switch to a 10GbE network around the end of this year or the beginning of next, so the storage has to perform well enough to sustain that bandwidth.

    Regarding the containers now: I saw that Proxmox can handle them too. Would you keep them in OMV (everything is already configured, and I had the idea of restoring a backup of OMV while moving to Proxmox) or switch them to Proxmox?

    Basically, my final goal would be:

    - OPNsense with 2 of my NICs in passthrough mode (WAN and LAN)

    - OMV with 1 dedicated NIC in passthrough as well (for performance, and to be 10GbE-ready when I buy the adapter; see the sketch after this list)

    - Any other NICs shared with the other VMs
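
    For reference, the passthrough part as I understand it would look roughly like this on Proxmox (the VM ID and PCI address are only examples; IOMMU has to be enabled first):

    Code
    # 1) enable IOMMU on Intel in /etc/default/grub, then update-grub and reboot:
    #    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
    # 2) find the PCI address of the NIC to pass through
    lspci | grep -i ethernet
    # 3) attach it to the VM (ID 100 and address 0000:04:00.0 are examples)
    qm set 100 -hostpci0 0000:04:00.0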


    So, thanks for your help. I found many posts about migrating OMV to Proxmox, but nothing that really matches my use case.


    I will post the same question to the Proxmox forum if you don't mind, just to have their opinion too.

    Hello, any thoughts about my last input? Could an admin move my thread to a more appropriate section of the forum (maybe the plugins section?), since after investigation it appears that it is ClamAV that kills my setup?

    Thanks.

    Hello all,


    I am coming back to you because since I updated from OMV 4 to OMV 5, I am not able to get my openmediavault-nut plugin working anymore.

    I was using it with OMV 4 and got the right notifications without any issue. Before upgrading the system to OMV 5, I did uninstall it. Since I have been on OMV 5, no matter what I do, I always get this error when trying to activate the plugin. The installation works, I see the entry in the WebUI, and I can navigate the menu; the only thing is, if I try to activate it with the same settings I was using with OMV 4, I get this:


    Hello again all,


    I am back with my problem, for which I found the cause, but not the solution. It is activating ClamAV that completely kills my NAS: it starts with the Docker containers slowing down like hell and ends with a non-working system. I don't see any high CPU usage, and I don't really know where to start looking. ClamAV was used with OMV 4 without an issue; I uninstalled it before migrating to OMV 5 and reinstalled it with the same parameters (the defaults) plus "on-access scans" on my volumes. Where should I start looking?
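
    For reference, the first things I plan to check (Debian service names; since on-access scanning hooks into every file access, memory and I/O pressure seem like better suspects than CPU):

    Code
    # state and recent logs of the ClamAV services
    systemctl status clamav-daemon clamav-freshclam
    journalctl -u clamav-daemon --since "1 hour ago"
    # watch memory and disk pressure while a scan runs
    free -h
    iostat -x 5   # from the sysstat package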

    Hello, I don't think it is a question of temperature, because 29°C is definitely normal.

    I do think you need to go into the detailed SMART status report to find out which attribute is out of range. SMART warnings are rarely there without a reason.

    I think it is just an incremental value: you start from /dev/sdc with 0, then /dev/sdd with 1, /dev/sde with 2, and so on ... But nevertheless, I think you got the command right; we can see all partitions from all devices using the command:
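
    For illustration, one generic way to list them, and how that incremental numbering would pair with the device nodes (just a sketch, the device names are the ones from this thread):

    Code
    # list every partition on every device
    cat /proc/partitions
    # the incremental cciss index then pairs with the device nodes like this
    smartctl -i -d cciss,0 /dev/sdc
    smartctl -i -d cciss,1 /dev/sdd
    smartctl -i -d cciss,2 /dev/sde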

    And there, using a combination of cciss,2 and /dev/sde, I correctly get a different drive... It looks like it is a combination of both that we have to play with somehow...


    However, I think there is a small issue: if I execute the same command with /dev/sdd, for example, which should be another drive, I get the same answer back; look at the HDD serial number:
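
    To be explicit, the commands I mean are along these lines; only the device node changes, the cciss index stays at 0, and both return the same serial number. My guess is that with -d cciss,N the drive is addressed by the index through the controller, and the device node is only used as a handle to reach that controller:

    Code
    # same cciss index, different device nodes: identical identity output
    smartctl -i -d cciss,0 /dev/sdc
    smartctl -i -d cciss,0 /dev/sdd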


    I've created a PR to improve the host driver detection for HPSA HPA. See https://github.com/openmediavault/openmediavault/pull/687.


    Please execute the commands above to confirm that the code works.

    Hi votdev, thanks for your help. Basically, it looks like it returns something like in one of my previous posts in this thread, where I corrected the command myself but used 1 instead of 0 (cciss,1) in post number 4. Here is the result with 0 instead; it doesn't seem to make any difference, and from reading the manual I wasn't sure what that ID is for, but at least it reads the info from the drive perfectly in that case:


    votdev, here is the result of the commands you asked me to execute; obviously no host1 for the second one, but hpsa for the last one:


    Code
    root@media:~# realpath /sys/block/sdc
    /sys/devices/pci0000:00/0000:00:08.0/0000:0b:00.0/host1/scsi_host/host1/port-1:2/end_device-1:2/target1:0:1/1:0:1:0/block/sdc
    
    root@media:~# basename "$(realpath $(realpath /sys/block/sdc)/../../../..)"
    end_device-1:2
    
    root@media:~# cat /sys/class/scsi_host/host1/proc_name
    hpsa


    One remark that may be important: my HP DL380 G7 also has an embedded HP Smart Array controller, the P410i, and this one works in hardware RAID mode (there is no way to switch it to pure HBA mode). Just in case it makes any difference.