Long ago, we in HPC recognized the inevitability of the Attack of the Killer Micros, and with few exceptions, today's HPC systems are all based on some variant of commodity microprocessors and their commodity cousins, GPUs. A similar revolution swept the secondary storage market. After all, the I in RAID stands for inexpensive, a synonym for commodity.
Like the Cheshire Cat in Alice's Adventures in Wonderland, it seems increasingly clear that in the high-performance, low-latency interconnect space, we will be left with nothing but the smile. Simply put, the last remaining non-commodity component of HPC clusters – the high-performance interconnect – is in grave danger due to the global economic downturn and the price-performance pressures of commodity Ethernet.
Remember Coaxial Cables and BNC?
The old geezers among us, including yours truly, remember when Ethernet meant coaxial cables and CSMA/CD at 10 megabits/second. (Here's a shout-out to my colleague Chuck Thacker, one of the co-inventors of Ethernet.) Since then, we have seen many generations of advances in transport fabrics, switches, routers, and speeds.
One gigabit Ethernet is now a commodity item, included on almost all PC motherboards; 10 gigabit Ethernet is the high-bandwidth standard; and 40 gigabit and 100 gigabit Ethernet standards are under development by the IEEE. InfiniBand is one of the last major competitors to 10 gigabit Ethernet, and even the InfiniBand vendors are increasingly offering Ethernet compatibility (i.e., so-called converged fabrics).
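To see what that compatibility means in practice, note that ordinary sockets code is fabric-agnostic: with IP-over-InfiniBand (IPoIB), the same application runs unchanged over either network. Here is a minimal sketch; the host name and port are hypothetical placeholders, not anything from a real cluster.

```python
# A plain TCP client. The same code works whether the underlying fabric
# is commodity Ethernet or InfiniBand carrying IP traffic via IPoIB --
# the application never sees the difference.
# "compute-node-17" and port 5000 are hypothetical placeholders.
import socket

with socket.create_connection(("compute-node-17", 5000), timeout=5.0) as s:
    s.sendall(b"ping")
    reply = s.recv(4096)
    print(reply)
```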
Cloud Data Centers and HPC
I invite you to ponder your response to the following question. What is the difference between a megascale data center and a petascale computing system? Increasingly, the answer is "not much, but a few elements of the software stack."
Of course, this need not be the answer, but it likely will be unless we change our research and development strategies and our procurement models. Ironically, megascale data centers could also benefit from lower-latency, higher-bandwidth communication fabrics with scalable bisection bandwidth. It is time we brought the two together, for the HPC community could leverage both the economics of megascale data centers and their needs.
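To make "scalable bisection bandwidth" concrete, here is a back-of-the-envelope sketch comparing a non-blocking fat tree, typical of HPC interconnects, with a conventional oversubscribed data center tree. The node count, link rate, and 4:1 oversubscription ratio are illustrative assumptions, not measurements of any particular system.

```python
# Back-of-the-envelope bisection bandwidth: non-blocking fat tree vs.
# an oversubscribed data center tree. All numbers are hypothetical.

nodes = 1024       # compute nodes / servers (assumed)
link_gbps = 10     # per-node link rate, e.g., 10 gigabit Ethernet

# Non-blocking fat tree: full bisection -- any half of the nodes can
# talk to the other half at full link rate simultaneously.
fat_tree_bisection = (nodes / 2) * link_gbps

# Conventional data center tree, oversubscribed at the aggregation
# layer (ratios of 4:1 or worse were common in this era).
oversubscription = 4
dc_tree_bisection = fat_tree_bisection / oversubscription

print(f"fat tree bisection:        {fat_tree_bisection:,.0f} Gb/s")
print(f"oversubscribed tree (4:1): {dc_tree_bisection:,.0f} Gb/s")
```

The gap widens as the system grows, which is exactly why communication-intensive HPC workloads, and increasingly data center workloads, care about fabrics whose bisection bandwidth scales with node count.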