Quick question for you.
I've had OMV running on Microsoft Azure for quite a while now (3+ weeks).
I had absolutely no issues with it and switched between different VM sizes without any problems.
With the network ACL I can control the endpoints' exposure and allow access just to myself.
To set this up, I created the VM in Hyper-V and then uploaded the data drives and the OS drive to an Azure storage account.
Suddenly, last weekend, I lost access to the VM; even after allowing all traffic I still cannot reach it on any port.
I have therefore downloaded the VHDs and created a Hyper-V VM; in this case everything works as it should.
This confirms the system is fine and the issue is specific to this VHD running on Azure.
I suspected a networking issue (such as the VM not getting a private IP from Azure DHCP), so after downloading the VHD I looked into the /var/log directory hoping for hints.
Here, in the boot log, I found that eth0 was not being brought up.
This explains why the public IP was reachable but the requests, once forwarded to the VM, were timing out.
When running on Hyper-V, instead, eth0 is found and is brought up.
Now, given that Azure provides no console access, I need to rely solely on logs to troubleshoot this further.
I'm wondering: is there any log that shows which devices are discovered during boot?
I can't rely on lshw as I have no SSH or console access (since no IP is assigned to OMV) and must rely on logs alone.
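In case it helps, this is roughly how I'd grep the logs copied off the mounted VHD for interface names. It's just a sketch: the helper name and the log file paths are my assumptions (a Debian-style layout with kern.log/syslog), and the pattern only covers the common naming schemes.

```shell
# find_nics LOGFILE... - print the unique NIC names mentioned in boot logs.
# Hypothetical helper; pass e.g. kern.log, syslog, dmesg copied from the VHD.
find_nics() {
  # Match common Linux interface naming schemes: eth0, ens3, enp0s3, eno1, p4s1...
  grep -ohE '\b(eth|ens|enp|eno|p[0-9]+s)[0-9a-z]*' "$@" 2>/dev/null | sort -u
}
```

Running it against the VHD's logs should reveal whether udev renamed the NIC to something other than eth0.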
Perhaps the NIC is no longer called eth0 and is identified as p4s1 or something else; if I can find this out, I can just add an entry to /etc/network/interfaces in Hyper-V and then re-upload the VHD.
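For reference, the stanza I'd add would look something like the following ("ens3" is only a placeholder until the logs reveal the actual name the kernel assigns on Azure):

```
# /etc/network/interfaces — hypothetical entry; replace ens3 with the
# interface name found in the boot logs.
auto ens3
iface ens3 inet dhcp
```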
Any help would be really appreciated.