Since I joined Microsoft in late 2007, I have written about science policy, interactions with the Federal government, and national competitiveness studies, in my role as a member of PCAST and chair of the Computing Research Association (CRA). Throughout, I have emphasized the need for strategic investment in long-term, basic research, especially as part of the economic stimulus package.
I have also discussed the rise of multicore computing, the consequent software crisis, and the need for innovation in both architecture and software, including Microsoft and Intel's joint funding of the Universal Parallel Computing Research Centers (UPCRCs) at Illinois and UC Berkeley. I have mused on the future of high-performance computing and its role as an enabler of scientific discovery. I have even written about my family, my rural childhood and my life experiences.
What I have not done is write about why I came to Microsoft and what I am doing – until now. Yes, my team manages the UPCRCs in partnership with Intel. Yes, I devote time and energy to research policy, both for the community and on behalf of Microsoft. Yes, I am involved in the future of high-performance computing, both politically and technically. However, that's not the entire story.
It's time to talk infrastructure so large it makes petascale systems seem small. It's time to talk about why I can't remember the last time I had this much fun. It's time to pull back the curtain and talk about the future of clouds. No, I'm not talking about weather forecasting, though I really enjoyed my past collaboration with the LEAD partnership.
I came to Microsoft to lead a new research initiative in cloud computing, one that complements our production data center infrastructure and our nascent Azure cloud software platform. You can read the press release and the web site for the official story. What follows is my personal perspective.
The Infrastructure of Our Lives
We all know the cloud premise – Internet delivery of software and services to distributed clients, from mobile devices to desktops. We tend not to think about how dependent we now are on those delivered services, though we are, just as we depend on the telephone and our water and electrical utilities.
Imagine a day without the web, without search engines, without social networks, without online games, without electronic commerce, without streaming audio and video. Our world has changed, and government, business, education, recreation and social interaction are now critically dependent on reliable Internet services and the hardware and software infrastructure behind them. However, more research and technology evaluation are needed to make them as trustworthy as the telephone network.
Building Internet services infrastructure using standard, off-the-shelf technology made sense during the 1990s Internet boom. (And yes, I remember how cool Mosaic was, when I first saw it at Illinois.) The facilities were small by today's standards, and the infrastructure could be deployed quickly. Today, however, the scale is vastly larger, our social and economic dependence is much greater and the consequences of failure are profound. Web service outages are now international news, and a cyberattack is considered an act of war.
For background on some of the challenges and problems in scaling, you might want to follow the Data Center Knowledge and High Scalability web sites. If you are new to this space, they and other reading will redefine your notions of large and reliable. You might not think 100 megawatts could be a data center design constraint, but it is. More importantly, you should fear – yea, verily, be absolutely terrified by – the wrath of 100 million unhappy customers should your Internet service fail. Every nightmare that has ever awakened a CIO in a cold sweat at 2 a.m. is real, but magnified a thousandfold. If it were easy, though, it would be neither exciting nor fun.
Cloud Infrastructure Challenges
Microsoft's business, like that of other cloud service providers such as Amazon, Google and Yahoo, depends on an ever-expanding network of massive data centers: hundreds of thousands of servers, many, many petabytes of data, hundreds of megawatts of power, and billions of dollars in capital and operational expenses. This enormous scale – far larger than even the largest high-performance computing facilities – brings new design, deployment and management challenges, including energy efficiency, rapid deployment, resilience, geo-distribution, composability, and graceful recovery.
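To make that scale concrete, here is a back-of-envelope sketch in Python. The per-server wattage and the cooling and power-delivery overhead (PUE) are my own illustrative assumptions, not Microsoft figures, but they show how a few hundred thousand commodity servers quickly reach the 100-megawatt territory mentioned above.

```python
# Back-of-envelope estimate of data center power draw.
# The specific numbers below are illustrative assumptions, not actual figures.

servers = 300_000        # "hundreds of thousands of servers"
watts_per_server = 250   # assumed average draw of one commodity server
pue = 1.7                # assumed power usage effectiveness (cooling + power delivery overhead)

it_load_mw = servers * watts_per_server / 1e6   # IT equipment load, in megawatts
facility_mw = it_load_mw * pue                  # total facility load, in megawatts

print(f"IT load:       {it_load_mw:.0f} MW")    # ~75 MW with these assumptions
print(f"Facility load: {facility_mw:.0f} MW")   # ~128 MW -- well into "hundreds of megawatts"
```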
I have been a "big iron" guy for a long time, and Internet and cloud services infrastructure does have analogs in petascale and exascale computing, but the workloads and optimization axes are different. Like today's HPC systems, cloud computing facilities are being built with hardware and software technologies not originally designed for deployment at such massive scale. Consequently, they are less efficient and less flexible than they could and should be. If we built utility power plants the same way we build cloud infrastructure, we would start by visiting The Home Depot and buying millions of gasoline-powered generators. This must change.
Imagine a world where heterogeneous multicore processors are designed and optimized for diverse workloads, where solid-state storage changes our historical notions of latency and bandwidth, where on-chip optics, system interconnects and LAN/WAN networking simplify data movement, where scalable systems are resilient to component failures, where programming abstractions facilitate functional dispersion across devices and facilities, and where new applications are developed more quickly and efficiently. This can be.
Cloud Computing Futures
Over the past fourteen months, I have been quietly building the Cloud Computing Futures (CCF) team, starting with a key concept: we must treat cloud service infrastructure as an integrated system, a holistic entity, and optimize all aspects of hardware and software. I have recruited hardware and software researchers, software developers and industry partners to pursue this vision. It's been a blast.
The CCF agenda spans next-generation storage devices and memories, new processors and processor architectures, networks, system packaging, programming models and software tools. We are a research and technology transfer team, whose roles are to explore radical new alternatives – "blank sheet of paper" approaches to cloud hardware and software infrastructure – and to drive those ideas into implementation and practice.
Effective research in this space requires changes to both hardware and software, and the resulting prototypes must be constructed and tested at a scale that is difficult for small teams to reach. This type of research and technology transfer is difficult in academia, because the efforts often cross many research disciplines.
For this reason, the CCF team is taking an integrated approach, drawing insights and lessons from Microsoft's production services and data center operations, and partnering with researchers, vendors and product teams worldwide. Our work builds on technical partnerships and collaborations across Microsoft, including Microsoft Research, Debra Chrapaty's Global Foundation Services (GFS) data center construction, operations and delivery team, and Ray Ozzie's Azure cloud services group. We are also partnering with an array of hardware-technology providers and companies as we build prototypes.
Now You Know
For me, CCF has been an opportunity to apply research experiences and ideas gleaned over the past twenty-five years of my academic career. Equally importantly, it is a chance to build prototypes at scale to test those ideas, and then help drive the promising technologies into practice. The past year has been great fun, and I have been privileged to attract and partner with some wonderful people on this adventure, including Jim Larus and Dennis Gannon.
Now you know why I came to Microsoft. It was a chance to practice what I've been preaching. It was a chance to help design the biggest of big iron. It was a chance to help invent the future. It's a pretty cool gig for a balding old geezer like me!