N.B. I recently responded to some questions from John West (HPCwire) regarding the Microsoft Cloud Computing Futures (CCF) research project. In that Q&A, I also commented on the relevance of cloud computing to computational science. What follows is an augmented subset of that Q&A, focused solely on the relevance of clouds to technical computing.
Cirrus, stratus, altostratus, cumulus: they are the scientific names of the common clouds. They drift across the sky, reflecting the changing wind and weather. A new front is blowing into computational science, and cloud computing will soon advance scientific and engineering discovery.
That shift is one of the reasons I am excited about cloud services. I believe we are at a technological transition point just as profound as the one engendered by the "attack of the killer micros." This is true whether you are enamored of Microsoft's Azure, Amazon's AWS or Google's Apps.
Learning from History
Let's step back and gain some perspective, starting with the "Branscomb pyramid" ("From Desktop to TeraFlop: Exploiting the U.S. Lead in High Performance Computing," Lewis Branscomb et al.) and the diverse types of technical computing that now exist. We tend to focus on the apex of the computing pyramid, now exemplified by petascale systems intended to support only a handful of applications and users. However, most science is conducted at lower levels of the pyramid, using desktop computers, laboratory clusters and university-scale computing infrastructure. By analogy, it's exciting to talk about international hypersonic transport, but most people care more about efficiently and painlessly commuting to work each day.
Over the past decade, we successfully leveraged commodity hardware to create large clusters. What was nearly heretical when we first deployed clusters at NCSA is now commonplace. However, this scaling has not been without cost. Cluster programming remains difficult at scale; we have turned a generation of researchers into parallel programmers and system administrators; institutions are struggling with rising demands for machine space, power and cooling; and duplicated facilities make sharing expertise and data difficult. We are heavily focused on computing at a time when data analysis now dominates much of science and engineering. Like many of you, I contributed to this state of affairs, and I feel some responsibility to help us find a new path.
Hype and Reality
Let's separate the hype from the reality. Clouds won't magically restore your 401(k) retirement fund, cure halitosis or even help you drop twenty pounds before your upcoming high school reunion. Like all new technologies, however, they challenge some conventional computing wisdom and change some of our operating assumptions.
Personal computing was a non sequitur when computers filled rooms. Internet search was nonsensical when there were only a handful of research web sites. Social networking services depend on inexpensive, ubiquitous broadband access and mobile devices. Hosted cloud services and software are now feasible given the confluence of inexpensive but powerful multicore processors, high-capacity storage, broadband networks and the economies of scale that consolidation in cloud data centers makes possible.
Five Reasons Clouds Matter
First, the economies of scale from mega-data center provisioning mean capital and operating costs can be lower. When buying servers in 500,000-unit lots and designing facilities at scale, the provider has real financial and technical leverage. These savings would allow universities, laboratories and federal agencies to devote a larger fraction of precious funding to research rather than infrastructure. Remember Dan's computational science corollary: it's the science, not the infrastructure, that matters.
Second, truly large-scale data analysis, particularly multidisciplinary data fusion, can become routine. In the scientific community, we have worked hard to build workflows for access to distributed data. Consolidation and co-location enable new approaches, and we tend to forget that cloud data centers hold many, many petabytes of disk storage. It really is possible to query multiple petabytes of data using intuitive, easy-to-use desktop tools; the business community does it all the time. Jim Gray proved the power of database tools on several scientific data analysis projects, including the SkyServer.
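As a concrete illustration, here is a minimal Python sketch of the kind of desktop-driven query Jim Gray championed. The table and column names (PhotoObj, objID, ra, dec, petroMag_r) follow the public SDSS SkyServer schema, but the pyodbc driver choice and the DSN connection string are placeholder assumptions on my part, not a pointer to a real deployment.

```python
# Minimal sketch: a desktop-scale SQL query against a large astronomy
# archive, in the spirit of Jim Gray's SkyServer work.
# Assumptions: a pyodbc-compatible driver and a placeholder DSN;
# table and column names follow the public SDSS SkyServer schema.
import pyodbc  # any standard DB-API driver would serve equally well

QUERY = """
SELECT TOP 100 objID, ra, dec, petroMag_r
FROM PhotoObj
WHERE petroMag_r BETWEEN 17.5 AND 18.0  -- moderately bright objects
ORDER BY petroMag_r
"""

def run_query(connection_string):
    """Run a simple archive query and print the matching objects."""
    with pyodbc.connect(connection_string) as conn:
        cursor = conn.cursor()
        for obj_id, ra, dec, mag in cursor.execute(QUERY):
            print(f"{obj_id}: ra={ra:.5f}, dec={dec:.5f}, r={mag:.2f}")

if __name__ == "__main__":
    # Hypothetical DSN: point this at whatever hosted archive you use.
    run_query("DSN=SkyServerExample")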
Third, clouds facilitate time-space tradeoffs. For a parameter study, it is just as cost-effective to run 100,000 individual jobs simultaneously as sequentially, something our batch queuing strategies on high-performance computing systems strongly discourage. In geek terms, the area is the same whether one uses tall, skinny rectangles (lots of resources for a short interval) or long, flat rectangles (a few resources for a long interval). The elasticity of clouds, a consequence of multiplexing many users and workloads, means that the resources are always available without waiting.
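To make the rectangle arithmetic concrete, here is a tiny Python sketch with an illustrative, made-up hourly rate. It shows that the cost of a 100,000-job parameter study is identical whether the schedule is wide or long; only the wall-clock time differs, by five orders of magnitude.

```python
# Illustrative arithmetic for the time-space tradeoff: total cost is
# instances x hours x hourly rate, so the "area" of the schedule
# rectangle is what you pay, regardless of its shape.
HOURLY_RATE = 0.10  # dollars per instance-hour (made-up example figure)

def cost(instances, hours_per_job, jobs):
    """Return (total cost, wall-clock hours) for a parameter study."""
    total_instance_hours = jobs * hours_per_job
    wall_clock = total_instance_hours / instances
    return instances * wall_clock * HOURLY_RATE, wall_clock

# Tall, skinny rectangle: 100,000 instances, each running one job for an hour.
wide_cost, wide_time = cost(instances=100_000, hours_per_job=1, jobs=100_000)

# Long, flat rectangle: one instance grinding through all 100,000 jobs.
long_cost, long_time = cost(instances=1, hours_per_job=1, jobs=100_000)

print(f"wide: ${wide_cost:,.0f} in {wide_time:,.0f} hour(s)")
print(f"long: ${long_cost:,.0f} in {long_time:,.0f} hours (roughly 11 years)")
# Same cost either way; the cloud simply lets you pick the shape.
```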
Fourth, I also believe that the cloud will offer HPC services at increasing scale, beginning with the workloads typified by today's laboratory clusters. This is already happening, and as I/O device virtualization continues to improve, communication latencies will decrease and tightly coupled computations will become attractive at ever-larger scales.
Finally, clouds can provide a seamless extension of familiar desktop tools and interfaces, allowing computing and analysis to scale within the same environment that researchers use every day. We can leverage consumer software, just as we have leveraged consumer hardware. There is no reason our computational science tools and our everyday tools need be different.
Shameless Microsoft Plug
To this point, I've written about clouds in vendor-neutral terms. With a nod to the company name now on my paycheck, I encourage you, if you haven't already, to take a look at Windows Azure and its cloud computing and storage services. In addition to rich web services, there is both open-source and Visual Studio programming support. In a future post, I will describe an Azure example application for computational science. Here endeth the marketing pitch.
Insight, Not Infrastructure
As Richard Hamming famously noted, "The purpose of computing is insight, not numbers." Dan's computational science corollary is simple: "The purpose of computational science infrastructure is scientific discovery, not big-iron bragging rights." It's time to focus on what matters and embrace the future. Our graduate students and post-docs will thank us.