Today (May 22, 2013), I testified before the House Science, Space, and Technology Committee's Subcommittee on Energy at a hearing on America's Next Generation Supercomputer: The Exascale Challenge. I was pleased to be a witness, along with my old friends and collaborators, Dona Crawford (Lawrence Livermore National Laboratory), Roscoe Giles (Boston University) and Rick Stevens (Argonne National Laboratory).
You can read the hearing charter and my extended written testimony on the hearing web site and watch an archived video of the hearing. In my written and oral testimony, I made four points, along with a specific set of recommendations. Many of these points and recommendations echo my previous testimony and the findings of many earlier high-performance computing studies.
These include recommendations from several studies I have chaired, ranging from the President's Information Technology Advisory Committee (PITAC) review of computational science and the President's Council of Advisors on Science and Technology (PCAST) review of the U.S. Networking and Information Technology Research and Development (NITRD) program to the High-End Computing Revitalization Task Force (HECRTF) workshop and a recent National Academies Board on Global Science and Technology (BGST) study of computing futures.
With that backdrop, here is what I said during the hearing.
Oral Testimony
First, high-performance computing (HPC) is unique among scientific instruments, distinguished by its universality as an intellectual amplifier.
New, more powerful supercomputers and computational models yield insights across all scientific and engineering disciplines. Advanced computing is also essential to analyzing the torrent of experimental data produced by scientific instruments and sensors. However, it is about more than just science. With advanced computing, real-time data fusion and powerful numerical models, we have the potential to predict the tracks of devastating tornadoes such as the recent one in Oklahoma, saving lives and ensuring our children's futures.
Second, the future of U.S. computing and HPC leadership is uncertain.
Today, HPC systems from DOE's Oak Ridge, Lawrence Livermore and Argonne National Laboratories occupy the first, second and fourth places on the list of the world's fastest computers. One might surmise that all is well. Yet U.S. leadership in both deployed HPC capability and in the technologies needed to create future HPC systems is under challenge.
Other nations are investing strategically in HPC to advance national priorities. The U.S. research community has repeatedly warned of eroding U.S. leadership in computing and HPC and of the need for sustained, strategic investment. I have chaired many of those studies as a member of PITAC, PCAST, and National Academies boards. Yet these warnings have largely gone unheeded.
Third, there is a deep interdependence among basic research, a vibrant U.S. computing industry and HPC capability.
It has long been axiomatic that the U.S. is the world's leader in computing and HPC. However, global leadership is not a U.S. birthright. As Andrew Grove, the former CEO of Intel, noted in his famous aphorism, "only the paranoid survive." U.S. leadership has been hard fought and repeatedly earned, built on a continued Federal government commitment to basic research, the translation of research into technological innovations, and the creation of new products.
Fourth, computing is in deep transition to a new era, with profound implications for the future of U.S. industry and HPC.
U.S. consumers and businesses are an increasingly small minority of the global market for mobile devices and cloud services. We live in a "post-PC" world where U.S. companies compete in a global device ecosystem. Unless we are vigilant, these economic and technical changes could further shift the center of enabling technology R&D away from the U.S.
Recommendations for the Future
First, and most importantly, we must change our model for HPC research and deployment if the U.S. is to sustain its leadership. This means much deeper and sustained interagency collaboration, defined by a regularly updated strategic R&D plan with verifiable metrics, and backed by commensurate budget allocations and accountability to realize the plan's goals. DOE, NSF, DOD, NIST and NIH must be active and engaged partners in complementary roles, along with long-term industry engagement.
Second, advanced HPC system deployments are crucial, but the computing R&D journey is more important than any single system deployment by a pre-determined date. A vibrant U.S. ecosystem of talented and trained people and technical innovation is the true lifeblood of sustainable exascale computing.
Finally, we must embrace balanced, "dual use" technology R&D, supporting HPC while ensuring the competitiveness of the U.S. computing industry. Neither HPC nor big data R&D can be sacrificed to advance the other, nor can hardware R&D dominate investments in algorithms, software and applications.