I just spent an all-too-brief two days in Hamburg, attending the International Supercomputing Conference (ISC). It was a chance to catch up with some old friends, discuss the ever-changing nature of computing technology and reflect on the Weltanschauung that is the HPC community. In addition to the buzz around the latest Top500 list and the new Japanese machine atop it, there were many debates about the intersection of HPC and cloud computing.
My friend and former collaborator in North Carolina, Wolfgang Gentzsch, organized a session at ISC on HPC and the Cloud, in which I was delighted to participate. In my talk, I made four points, several of which were echoed by Josh Simons from VMware in a great talk that followed mine.
First, HPC has experienced several punctuated equilibria as we transitioned from one parallel computing model to another. From instruction-level parallelism in the mainframe era through vector systems, MPPs, SMPs, clusters and clusters plus accelerators, each transition has brought technical and cultural challenges. The technical challenges have been well documented; the cultural challenges are less often discussed, for they affected the technology providers (vendors), the technology operators (HPC centers and facilities) and the technology users (scientists, engineers and other researchers). Clouds bring yet another set of technical and cultural challenges.
When any new technology appears, there is a great temptation to see it through the lens of the old, either in nomenclature or behavior (see "horseless carriage"), or to attempt to converge the new into a variant of the old (see assimilation by the Borg). Many corporate acquisitions fail for just this reason: the acquirer does not recognize that it is acquiring something valuable precisely because it is not like itself. Successful adoption and exploitation occur when all parties are willing to work together to help create the ecosystem of capabilities needed. I believe we are moving down the path of successful adoption with cloud-based HPC.
Second, I talked about the need to continue democratizing access to HPC, targeting the "excluded middle": those potential and actual technical computing users who need more computing than is available on their desktops but who find traditional HPC too difficult to use. They are the majority of scientists and engineers. Client plus cloud acceleration is one possible solution to this problem, and it could empower a new set of HPC users. See HPC and the Excluded Middle.
Third, I reminded the audience that there is another burgeoning challenge and opportunity: big data. We have big problems with big data, and the scientific and engineering communities are drowning in the data produced by a new generation of sensors and scientific instruments. We need simple, easy-to-use tools that allow researchers to mine and extract insight from that data. In that spirit, I discussed our continuing work on Excel DataScope, which lets one use the familiar Excel interface while applying rich analytics algorithms in the cloud. See The Zeros Matter: Bigger Is Different.
Finally, I highlighted the worldwide progress of Microsoft's research cloud partnerships. We now have over 75 active projects, launched in partnership with research agencies in the U.S., the EU, Japan, Australia and Taiwan. (See Cloud Seeding: Stimulating Discovery and Innovation.) These partnerships are an opportunity to work together to understand how clouds can best support technical computing and discovery, driving change among cloud infrastructure and service providers while also educating researchers and HPC providers about this technical and cultural transition.
It's always exciting to be a part of these technical collaborations.