Today, Microsoft and Intel jointly announced the creation of two Universal Parallel Computing Research Centers (UPCRC): one at the University of California at Berkeley (UC-Berkeley) and a second at the University of Illinois at Urbana-Champaign (UIUC). The two university press releases (Illinois and UC-Berkeley) are also posted. As Tony Hey said during the announcement, "Driven by the unprecedented capability of multicore processors, we're in the midst of a revolution in the computing industry, which profoundly affects the way we develop software."
Serious Industrial Funding
Illinois and Berkeley were selected after an evaluation of twenty-five major research institutions, based on their long history of parallel computing research and leadership. The UC-Berkeley center will be led by Dave Patterson, and the Illinois center will be led jointly by Marc Snir and Wen-mei Hwu. Each center will involve a host of faculty, post-doctoral associates and graduate students.
Equally important, these are not typical small industrial research awards: Microsoft and Intel have committed a combined $20 million to the two research centers over the next five years. An additional $8 million will come from UIUC, and UC-Berkeley has applied for $7 million in matching funds from a state-supported program.
Multicore: The Revolution Is Here
Microsoft's Chief Research Strategy Officer, Craig Mundie, has evangelized at length about how transformative multicore computing will be. It is about both moving parallel computing into the mainstream, something those of us in parallel computing have been working toward for decades, and seizing the enormous opportunity to deliver new applications that require parallelism.
New applications are at hand: location-aware, intelligent responses; coupled sensors and mobile devices; vision and speech processing; mobile devices coupled to backend systems via computing clouds; and a host of other possibilities. However, these opportunities also bring challenges: easing the parallel programming burden and simplifying parallel software development.
Hence, research at the UC-Berkeley and Illinois centers will focus on advancing parallel programming applications, architecture and operating systems software. In short, the centers will address the broad range of issues central to moving parallel computing into the mainstream and to creating a new generation of applications. As the press release notes,
Parallel computing has become essential to enhancing program performance and satisfying the increased demands for power efficiency and small form factors. The challenge ahead for the technology industry is bringing the benefits of multicore processing, based on tens or hundreds of cores, to mainstream developers and, eventually, consumers.
The ultimate goal is to make parallel computing easier for developers by providing solutions for new platform architectures, operating system architectures, programming methods and tools, and application models. The changes needed affect the entire industry, from consumers to hardware manufacturers, and from the entire software development infrastructure to the application developers who rely upon it.
The bottom line (quite literally) is that Microsoft and Intel are investing large sums internally and externally to accelerate development of programming models, architectures and next-generation applications. This is no small task, and we need to work together collectively (academia, industry and government) if we are to make progress on a problem that has bedeviled us for many years.
Parallel Programming Challenges
Is there a silver bullet that will dramatically simplify parallel programming? Based on all of our experiences, that seems extremely unlikely. Rather, I believe we will see a collection of techniques emerge that -- over time -- simplify our approaches and raise the level of abstraction we use to specify parallelism. I also think there are opportunities to exploit inexpensive hardware for truly deep optimization.
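To make the notion of raising the abstraction level concrete, consider a global sum written in a directive-based shared-memory model such as OpenMP. This is only an illustrative sketch of my own (the function and array names are invented for the example): the programmer declares that the loop is parallel and that partial sums should be combined, while thread creation, scheduling and the reduction itself are left to the compiler and runtime.

    #include <stdio.h>
    /* Compile with OpenMP enabled, e.g.: cc -fopenmp sum.c */

    /* Sum an array in parallel. The pragma declares that the loop
       iterations are independent and that the per-thread partial
       sums should be combined with +. */
    double parallel_sum(const double *a, int n)
    {
        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += a[i];
        return sum;
    }

    int main(void)
    {
        enum { N = 1000000 };
        static double a[N];
        for (int i = 0; i < N; i++)
            a[i] = 1.0;
        printf("sum = %f\n", parallel_sum(a, N));  /* expect 1000000.0 */
        return 0;
    }

Strip the pragma and this remains a correct serial program; the parallelism is expressed as a declaration of intent rather than a restructuring of the algorithm.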
Finally, we may well have to change our definition of performance to include, more directly and explicitly, the human cost of parallel software development. The DARPA High Productivity Computing Systems (HPCS) project was intended to pursue that goal. To put this notion of productivity in perspective, if we still developed web applications using only TCP/IP (i.e., if we wrote only low-level protocol code), we would not have the rich environment we enjoy today. Tellingly, we still write parallel codes that way using MPI. There is, perhaps, a moral there. This is something I have commented on multiple times in the past (see Multicore: Our Software Crisis and Petascale and Multicore Redux).
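By way of contrast, here is roughly what the same global sum looks like when written at the MPI level, where the programmer explicitly manages ranks, communicators and collective operations, much as a web server written directly on TCP/IP must manage sockets and packets. Again, this is a sketch for illustration only, with a stand-in value for each rank's local partial sum:

    #include <stdio.h>
    #include <mpi.h>

    /* Compile with mpicc; launch with mpirun/mpiexec. */
    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Stand-in for a locally computed partial sum. */
        double local = (double)(rank + 1);

        /* The programmer explicitly selects the collective, the
           datatype, the reduction operator, the root rank and the
           communicator. */
        double global = 0.0;
        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);

        if (rank == 0)
            printf("global sum across %d ranks = %f\n", size, global);

        MPI_Finalize();
        return 0;
    }

Neither fragment is long, but the MPI version makes the programmer responsible for decomposition and communication details that the higher-level version hides, and real applications multiply that burden many times over.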
The Academic-Industry Partnership
As a newcomer to industry and a member of PCAST (the President's Council of Advisors on Science and Technology), I am also pleased that this industry-academic partnership is underway. PCAST has long encouraged greater industry participation in research, and this is an outstanding example of such a partnership.
Small Worlds
In the small world category, Andrew Chien and I, former academic colleagues, will be managing the two academic parallel computing centers for Intel and Microsoft, respectively. I look forward to working with Andrew and with my old friends and former academic collaborators at Illinois and UC-Berkeley.
I am excited about multicore and manycore platforms. However, I have been involved in parallel computing since the mid-1980s and have witnessed the search for the silver bullet that will make parallel computing "easy". In those days it was an automatic parallelizing compiler for distributed memory parallel systems. Expressing my pessimism at the prospect of such a thing at a conference lunch in mixed company, I was set upon by an angry attendee who said, "I suppose you enjoy writing MPI". Well, here we are about 20 years later, and software for new parallel computer systems still includes MPI but, to the best of my knowledge, no automatic parallelizing compiler.
There is a fundamental difference in the situation now, however. The platform is different, and we have 20 years of experience that we, or I should say I, didn't have then. (Other people have more experience.) Having said that, I am optimistic that the new partnerships at Illinois and Berkeley will produce new tools, libraries, and ways to think about parallel processing.
I do, by the way, still occasionally write the odd bit of MPI code. Now, where did I put that CUDA user's guide...?
Posted by: Dave Semeraro | March 18, 2008 at 12:42 PM