SC19, the annual gathering of the advanced computing cognoscenti, many sporting shirts festooned with corporate and institutional logos and geek jokes, is now underway in Denver, Colorado. After arriving from the outskirts of western Nebraska and Kansas (i.e., Denver International Airport), high-performance computing fans are descending on the Colorado Convention Center in downtown Denver, eager to discuss new server designs, interconnect advances, and software, while surfing the artificial intelligence tsunami that now consumes all things computing, whether exascale or edge, bare metal or containerized.
I will soon be at SC19 myself, visiting with all my geeky friends, after being at the National Science Board and having dinner with the new PCAST members, but this essay is about none of that. Rather, I want to highlight two recent reports, both of which outline the need for a rethinking of our approaches. Put another way, Feynman’s 1959 “go small” clarion call in There’s Plenty of Room at the Bottom has been answered, and there’s not much more room at the inn, for CMOS FinFETs are down to small numbers of atoms.
Concurrently, we are encountering fabrication and economic challenges below 7 nanometers, though there is still progress to be made. As I told Nature in 2016, I think we will run out of money before we run out of physics. (See The Chips Are Down for Moore’s Law.) Regardless, innovation will continue, albeit along other axes. (See Nothing Lasts Forever for musings on how aircraft innovation has morphed from a focus on speed.)
Last week, the U.S. Department of Energy released the Basic Research Needs for Microelectronics report, which many of us have been editing and writing for the past year. I was pleased to co-chair the effort with Cherry Murray (Harvard/Arizona) and Supratik Guha (Argonne National Laboratory). The report, a collaboration among three Department of Energy offices – Advanced Scientific Computing Research (ASCR), Basic Energy Sciences (BES), and High Energy Physics (HEP) – highlights five critical needs:
- Flip the current paradigm by defining innovative material, device, and architecture requirements, driven by applications, algorithms, and software. We must break down the hierarchical abstractions that have served us so well and engage in holistic, end-to-end design, where, for example, material properties can shape the design of an algorithm. This requires, of course, a new model of education and collaborative design, of which the following four items are specific instances.
- Revolutionize memory and data storage. Our current memory hierarchies are too energy inefficient and too slow, and their capacities are too limited to meet future needs. We need some new materials science and physics insights, coupled with innovative design.
- Reimagine information flow unconstrained by interconnects. Current intrachip and interchip energy costs and data transfer rates, coupled with von Neumann architectures, limit design options.
- Redefine computing by leveraging unexploited physical phenomena. Seek new materials, beyond just CMOS evolution.
- Reinvent the electricity grid via new materials, devices, and architectures. This is but one of many application challenges, driven by the rise of renewable energy and the need to build a next-generation, highly resilient grid.
Last week, the Fast Track Action Committee on Strategic Computing, part of the U.S. National Science and Technology Council (NSTC), released the National Strategic Computing Initiative Update: Pioneering the Future of Computing. The report’s opening sentence echoes many of the points in the Department of Energy study, noting,
The national computing landscape is undergoing rapid evolution along multiple dimensions due to the introduction of new and potentially disruptive technologies and the demands of new classes of data intensive applications. Computer architectures and systems are more heterogeneous and complex, and the challenges associated with the complexity and sustainability of software are significant.
The NSTC strategic computing report recommendations include:
- Embrace a diversity of hardware and software approaches for the future of computing and leverage the innovation ecosystem
- Encourage novel solutions that leverage in-network and edge-processing capabilities to process data close to the source as part of end-to-end application workflows
- Support the rapid translation to practice of basic R&D and technology to address scientific challenges requiring the effective integration of advanced software and hardware
There’s trouble in River City. The free ride from Dennard scaling is over; the von Neumann architecture is showing its age; and the hierarchical abstractions we have so cherished are now an intellectual box from which we must escape.
It’s time for ab initio, integrative, transdisciplinary thinking, as I discussed in Renaissance Teams: Reifying the School at Athens.