Posts by Spy Alelo

    Check inside the share and make sure there isn't a file named ".metadata_never_index". Your Mac won't show it by default, so check from the Debian CLI instead.


    *Edit: If the file exists, remove it by using this command: "sudo rm -rf /.metadata_never_index"
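    For anyone who wants to script the check, here's a minimal sketch. The share path is an assumption (point it at your own mount point), and you may need to prefix rm with sudo if root owns the file:

```shell
# Look for the hidden .metadata_never_index file inside the share and
# delete it if present. SHARE_PATH is a placeholder -- substitute your
# share's actual mount point.
SHARE_PATH=/srv/share

if [ -e "$SHARE_PATH/.metadata_never_index" ]; then
    rm -f "$SHARE_PATH/.metadata_never_index" && echo "removed"
else
    echo "not present"
fi
```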

    I wanted to share a bit of info with y'all in terms of cost/performance.


    The ML30 and DL20 servers are built on Intel's Greenlow server platform. The good news is that this is a server-class chipset, with all of the HPE management and enterprise stability that we all crave. It's also the most inexpensive lineup, and you can use any consumer-class Skylake processor currently available, from the classic Pentium all the way up to the i7 and Xeon E3. Basically, you can buy yourself a basic model and build it up if you happen to have the extra parts. Here are some highlights:


    • DDR4 UDIMM ECC supported (cheap nowadays) with a maximum of 64GB
    • ECC DDR4 RAM is optional, NOT required
    • From consumer to enterprise class processor support based on Skylake
    • CPU with IGD is NOT required, iLO acts as the system GPU
    • Embedded iLO4 management
    • Very quiet for home and office use
    • Dual embedded gigabit NICs
    • Built for 24x7 operations
    • Full UEFI support!
    • 6 SATA AHCI/RAID ports: 4 on a mini-SAS connector, 2 onboard SATA ports
    • Embedded micro-SD card slot (not recommended for OMV, but excellent for ESXi or embedded pfSense)


    They both have options for storage backplanes if you like. 100% compatible with Debian 8.xx in UEFI or Legacy BIOS modes as well (I tested this), and you can add the Ubuntu/Debian repositories to get the monitoring tools that interact with iLO. The ML30 has the advantage for home use since no rack is needed and it can hold more drives. Also, just between us, a little bird told me that 7th generation Kaby Lake processors will be supported. :D


    Additional specs:


    ProLiant DL20 Gen9
    ProLiant ML30 Gen9



    It looks like the interface is down. It's possible that neither device supports auto-MDIX, or it could be as simple as the Ethernet cable not being plugged in all the way. Double-check the connections to be sure.


    *Edit- my mistake, I skimmed through the info too fast. Can you show the state of the interface?


    ifconfig eth1

    Because repeating what someone else said is not childish enough, right? Stop the useless flamewar in this thread; this is not what the support forums are for. You've been warned.

    Except for the small issue described, my "useless bridge config" works perfectly as expected, even when you dismiss its use. Because I don't know how and why iLO and/or its port sharing generates this issue, I asked you as the HPE expert. Obviously you don't know either. That is no serious problem - I can live without an answer. How about you?

    Except it doesn't work as expected for you. You can make any assumptions you like about me; it's not going to change or improve your situation. I actually find it hilarious that you think I don't know what's up lol.


    Go get yourself a real switch and stop trying to prove something you are not. You shouldn't be using enterprise equipment if you can't make heads or tails of the info I gave you. If you're trying to save 3 watts of power by skipping a switch, you are doing it wrong. You are contradicting yourself by using an enterprise solution that has a monitoring SoC powered 24x7.


    Now let's stop this right here and move on to do something productive. There are more people who will truly appreciate any help they can get, and you are in the way.

    Because the question was not how you would connect devices, I gave you all the necessary information - except, of course, that only NIC port 1 is connected to the external switch. I have the feeling that you should just answer "I don't know" when you don't know an answer.
    BTW: A bridge makes the host with the bridged NICs work as a switch - and another NIC or another external switch, plus the additional power they consume, are not free of cost...


    BR
    Jan

    Obviously you don't know how iLO works to prevent the issue, but I'll leave it at that since you seem to be so knowledgeable and capable of resolving it yourself. And thanks for the lesson on a useless bridge config. I have a feeling that you should quit while you are ahead.


    If you ask a question I'll try to answer the best I can to help people in this forum. Knowing the purpose of what you are trying to do helps me and others to think of alternatives to your problem as well. Don't like the answer? Then look somewhere else, don't get clever with those trying to help out.

    I don't know why you have bridged the interfaces, but if they are both on the same switch you will have some major issues passing certain kinds of traffic, and iLO would be one of them. Just because iLO shares a port with the system's NIC doesn't mean the port becomes a switch. I have a feeling that the iLO ARP record may not be passed along, or the wrong record is presented to the VMs.


    Again, I am not sure why you bridged them, but I would troubleshoot this by removing br0 and using only eth0 (where iLO sits by default). Remove eth1 from the network temporarily and re-assign eth0 as the interface used by the VMs.
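    As a sketch (Debian-style networking assumed; use your own addressing), the temporary fallback could look like this in /etc/network/interfaces once the br0 and eth1 stanzas are removed or commented out:

```
# /etc/network/interfaces -- sketch after dropping the br0/eth1 stanzas.
# Swap DHCP for your own static addressing if you prefer.
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp
```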


    If the VMs can now access iLO, you may want to rethink your layout. Bonding the embedded interfaces to each other is not recommended since you will encounter yet again a similar issue with iLO due to potential ARP issues. Here are a few suggestions:


    1. If you want redundancy, add an additional NIC and create a bonded interface between the card and eth1. Use balance-tlb or balance-alb if you don't have a switch that allows you to create LACP groups. Do not assign eth0 to be used by your OS, but have it connected to the network anyway. This will let your switch handle all the ARP records and iLO should be reachable by anything, and yet you still have redundant interfaces.


    2. If you don't need redundancy, but still would like to have different networks to be manageable dynamically per each VM, then implement a couple of VLANs. This of course would require extra hardware such as a manageable layer2/3 switch and configuring the VLAN logical interfaces on eth0. iLO is VLAN happy as well, so you can have it anywhere you like.


    3. Intel NICs usually behave better than Broadcom when managing this type of stuff, but you could get any 2-port or 4-port NIC and disable the onboard one. You can bond the ports as I suggested previously from within the very same card, and iLO will STILL work on the 1st port of the embedded NIC even after disabling it. Basically it will act as a bridge for iLO only and every VM should still reach it.
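    To make suggestions 1 and 2 concrete, here are /etc/network/interfaces sketches (Debian-style; the interface names, VLAN IDs and addresses are examples only - suggestion 1 assumes the add-in card shows up as eth2 and needs the ifenslave package, suggestion 2 needs the vlan package):

```
# Suggestion 1: bond the add-in card port with eth1 (ifenslave package).
auto bond0
iface bond0 inet dhcp
    bond-slaves eth1 eth2
    bond-mode balance-tlb    # or balance-alb; no LACP-capable switch needed
    bond-miimon 100          # link monitoring interval in ms
# Leave eth0 unconfigured by the OS but cabled, so iLO stays reachable.

# Suggestion 2: VLAN logical interfaces on eth0 (vlan package).
auto eth0.10
iface eth0.10 inet static
    address 192.168.10.2
    netmask 255.255.255.0
    vlan-raw-device eth0
```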


    I have a feeling that you may have to explain better why you have created that bridge, but this is the best I could come up with the little information you gave me.

    I have some pretty straight forward answers for you:


    1. Yes it is possible, but it's very complicated to implement unless you know how
    2. You would have to use Linux iptables to route between the two interfaces


    Even if you could do it, it's not actually practical, and you add dependencies that you shouldn't have. Instead, get an inexpensive network switch so you can share your only RJ45 drop. It will be easier, simpler and quicker.
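    For the curious, point 2 boils down to forwarding and NAT rules along these lines (a sketch only, run as root; eth0 as the uplink and eth1 as the second segment are assumptions):

```
# Enable forwarding and NAT between the two interfaces.
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
```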

    OMV3 has the flashmemory plugin, which serves a similar function.

    The guide I linked is not about minimizing writes. It's simply meant to show you how to schedule a TRIM job, which applies to SSDs only. Technically though, you could use both the plugin and the scheduled TRIM for longer drive life.
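    If you'd rather skip the full guide, the scheduled TRIM comes down to a cron entry along these lines (mount point and schedule are just examples):

```
# /etc/crontab -- run fstrim on the root filesystem every Sunday at 03:00.
0 3 * * 0  root  /sbin/fstrim -v /
```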

    A G6 server with a controller from that generation will easily take a 2.5" SAS or SATA drive of any currently available size. I think the largest I've seen in that form factor is 3TB, and yes, the server will not have a problem with it.


    There's a caveat though. You can use any drive larger than 2TB as long as it is not a boot drive. To boot from a drive above that size you would need UEFI, and that's only available on Gen9 servers.


    Just use a small SSD for boot and use the rest of the bays for your array, and you'll be all set.

    @Spy Alelo - I have acquired a DL360p Gen8 with a D2700. Will this config work with OMV? I won't be able to pick the hardware up until later in the month. The server is taken from a production environment with all bells and whistles, except it's running Windows now.


    That's an awesome config and yes, it will work. I advise that you build your array(s) with the storage controller instead of OMV so you can assign hot spares and do hot swaps. It also rebuilds faster than OMV can, and it has a battery-backed cache module in case of power failure. Good stuff.


    Great! Thanks for your time :)


    I'll try OMV 3 on this one I reckon.


    Now the question becomes how it'll handle mixing SAS and SATA drives...


    It will handle mixed drives just fine, but you don't want to mix SATA and SAS on the same array. As long as you keep them on separate arrays (you can create as many as you want) you will be okay. You can also use the very same controller to have all the arrays and mixed drives without an issue. It will also work with SSDs and allow you to adjust over provisioning, but make sure that you run the latest SPP before you set it all up.


    Since SPP is reserved to customers that have service contracts only, I might think about it and maybe add a link one of these days for anyone to download. Maybe, maybe not ;)

    You are going to have to tell me which server that is, but on all enterprise Gen8/Gen9 servers the LEDs are the same:


    Status: Red, Orange or off
    UID: Blue, blinking blue or off
    Power: Green, blinking green or amber. Only off if there's no power to the server.


    If you mean the Gen8 MicroServer lightbar, there's no way to turn off the LED at all. The status bar is controlled by the iLO processor only.

    Which one? If you are talking about the UID LED, it is only on if you press it (it's actually a button), or it blinks when you remote into the iLO KVM. iLO can also turn it on/off from its interface.