Omni-Path Switches at Supercomputing 15: Supermicro and Dell

by Jack @ UNIXPlus November 24, 2015

It was clear at Supercomputing 15 that Intel had two main things to promote: Knights Landing, its new Xeon Phi product, and Omni-Path, its new 100 Gbps network fabric aimed squarely at InfiniBand. Omni-Path, as we have published before, will be available as an add-in card as well as being integrated into the Knights Landing co-processor cards (so a Xeon Phi purchase comes with Omni-Path as an external connector). A few interesting things came out at SC15 worth sharing: switch designs and rack mockups, as well as a few disclaimers worth mentioning.

These are the default Intel switches, in 48-port and 24-port variants in a 1U form factor. Each has two power supplies built in for redundancy, with a network management port as well as a USB port on the left-hand side. For the initial release, customers can buy these switches either direct from Intel or from one of its partners, and at the beginning they'll be pretty much the same with a slight rebrand:

Of course, each has its own model number to play with:

The difference, I suspect, will be in the support packages for their customers. Perhaps not unexpectedly, both Supermicro and Dell also use the same information plaques for the switches:

As you can see, each port supports QSFP28 cabling, with a redundant fan and power supply. Intel is quoting a 100–110 ns port-to-port latency (which includes per-hop error detection and resubmission, without forcing an end-to-end resubmit), along with QoS, dynamic lane scaling, and up to 195 million MPI messages per second per port. The other switch in the family is this 9U behemoth, mocked up through Intel's news channel:
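To put those quoted figures in perspective, here is a quick back-of-envelope calculation (a sketch using only the numbers Intel quoted on the show floor, not measurements; the ~64-byte result suggests the message-rate claim corresponds to minimum-size MPI messages at line rate):

```python
# Back-of-envelope numbers derived from Intel's quoted Omni-Path specs.
# These are the marketing figures from SC15, not benchmarked values.

LINK_RATE_GBPS = 100          # line rate per port
MSG_RATE_PER_SEC = 195e6      # claimed MPI messages per second per port

# Time budget per message at the claimed rate (~5.1 ns)
ns_per_message = 1e9 / MSG_RATE_PER_SEC

# Payload per message if the link is saturated at that message rate (~64 B)
bytes_per_message = (LINK_RATE_GBPS * 1e9 / 8) / MSG_RATE_PER_SEC

print(f"{ns_per_message:.1f} ns per message, "
      f"~{bytes_per_message:.0f} B/message at line rate")
```

In other words, sustaining 195 million messages per second would leave roughly 5 ns per message, and saturating a 100 Gbps link at that rate implies messages of around 64 bytes each.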

Alongside the switches are the cards, of which we saw an engineering sample back at IDF. It turns out there appear to be two cards on offer, one with a PCIe 3.0 x8 link and another with PCIe 3.0 x16:

The difference here is that the x16 card gets a larger heatsink, and the x8 gets a sticker on one of the chips on board. Dell had the x16 card in a half-width dual processor node on show.

I found it slightly amusing that on every display for Omni-Path hardware, there was this little disclaimer saying that the technology is still waiting for FCC approval:

Intel specifications for maximum MPI messaging rate for the Intel Omni-Path Host Fabric Interface Silicon 100 Series ASIC (formerly code-named Wolf River). This device that has not been authorized as required by the rules of the Federal Communications Commission, including all Intel Omni-Path Architecture devices. These devices are not, and may not be, offered for sale or lease, or sold or leased until authorization is obtained.

I quizzed both Charlie Wuischpard, VP & GM of Workstation and HPC at Intel, and Raj Hazra, VP & GM of the Enterprise and HPC Platforms Group at Intel, about this. Part of my brain was thinking that the wording was a little odd (I'll be honest, it doesn't even read that well), and that if Intel is shipping Knights Landing to customers in Q4, then the Omni-Path part of KNL needs to pass the relevant standards, and that deadline is approaching rapidly. Intel has had years of practice dealing with FCC regulations, so despite the wording and my doubts, I would have assumed it shouldn't be an issue. Both Charlie and Raj confirmed that this is the case and that everything is still on schedule internally; the qualifier was needed purely for legal reasons.

Intel's aim with Omni-Path is squarely at InfiniBand, attempting to combine its current HPC/server ecosystem with a fabric that saves customers money. As well as being high performing, Intel plans to focus on metrics that matter to its users. There are interesting rumblings of Omni-Path working its way down to Xeon CPUs in the future and being integrated there as well. That made it all the more interesting to hear what was being said at the InfiniBand presentations during the conference.


Original Article Found Here:

Jack @ UNIXPlus