Those of us in computing, and especially those of us in high-performance computing (HPC), have received Pavlovian conditioning to believe that computers will always be faster if we just wait eighteen to twenty-four months. (Excuse me while I wipe the saliva off my chin, triggered by thinking about this.) It seems this trend has ever been so, and ipso facto, so shall it ever be.
Unfortunately, no exponential continues forever, at least outside of mathematics textbooks. All technology-based advances are ultimately limited by something, usually a physical or economic limit. Innovation then shifts to other metrics, against which advances can be measured and valued. Computing is no exception.
Leavin' On A Jet Plane
In the heady days of aviation, when world speed records were set every year, it must have seemed that the sky was literally the limit on achievable airspeeds. And it was, for supersonic flight proved impractical given the societal response to sonic booms, the fuel inefficiency of high-speed transports, and the limiting characteristics of materials at high temperatures. The SR-71 may well have been the apogee of high-speed flight, Aurora rumors notwithstanding.
Once those practical limits on airspeed were reached, aerospace innovation did not stop; it simply changed direction. The Boeing 747 and Airbus A380 fly at subsonic speeds, just as the Boeing 707 and 727 did, but they carry far more passengers and have longer range. Similarly, improvements in jet engines dramatically increased fuel efficiency and reduced maintenance costs. In turn, glass cockpits and digital avionics improved safety.
Perhaps most of all, changing expectations and airline deregulation democratized air travel, making it economically accessible to almost everyone, though with all the ambience one associates with riding a prison bus on a hot summer day. (Parenthetically, there are days I regret the loss of amenities as I board yet another airplane. I still remember hot meals on flights, and I mourn the loss of legroom.)
The Need for Speed
There is no reason to believe that computing is exempt from the cruel realities of bounded exponentials. The difficulties of continuing Dennard scaling and the rise of dark silicon are both challenging our ability to deliver higher-performance single-core and multicore chips: as transistors shrink, supply voltages no longer fall in proportion, so power density rises and an ever-larger fraction of each chip must stay powered off. (See Battling Evil: Dark Silicon.) Instead, there is an increasing emphasis on function-specific accelerators and devices, hardware-software co-design, and low-power designs for embedded use.
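For concreteness, the first-order scaling argument can be sketched in a few lines of Python. This is just my illustration of the standard relations (dynamic power per transistor roughly proportional to C·V²·f), with a hypothetical function name and an arbitrary 0.7 linear shrink factor:

    # Sketch: why the end of voltage scaling leads to "dark silicon".
    # Dynamic power per transistor is roughly P = C * V^2 * f.
    # Under classical Dennard scaling, a linear shrink by factor s reduces
    # capacitance C and voltage V by s and raises frequency f by 1/s, so
    # power per unit area stays constant. Once leakage puts a floor under
    # V, power density grows with each shrink instead.

    def power_density_ratio(s, voltage_scales):
        """Relative power density after a linear shrink by factor s (< 1)."""
        C = s                               # capacitance scales with feature size
        V = s if voltage_scales else 1.0    # a voltage floor breaks the scaling
        f = 1.0 / s                         # frequency rises as gates get faster
        transistors_per_area = 1.0 / s**2   # density grows quadratically
        return (C * V**2 * f) * transistors_per_area

    print(power_density_ratio(0.7, voltage_scales=True))   # ~1.0: constant
    print(power_density_ratio(0.7, voltage_scales=False))  # ~2.0: roughly doubles

When power density doubles per generation, the only options are lower clock rates or leaving transistors unpowered, which is precisely the dark silicon problem.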
In the HPC world, the performance of the systems on the Top500 list has been rising faster than single-processor performance, in large measure due to increasing system sizes. This is not, in the long term, a sustainable trend. Similarly, design studies for exascale systems suggest the need for dramatic improvements in energy efficiency (operations/joule), system reliability, and programmability.
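A bit of back-of-the-envelope arithmetic shows just how dramatic the energy-efficiency improvement must be. Here is a minimal sketch, assuming the 20 megawatt power envelope commonly cited in exascale design studies (an illustrative assumption, not a specification):

    # Back-of-the-envelope: the efficiency an exascale system would need.
    EXAFLOPS = 1e18        # target: 10^18 operations per second
    POWER_BUDGET_W = 20e6  # assumed envelope: 20 megawatts (joules per second)

    ops_per_joule = EXAFLOPS / POWER_BUDGET_W
    print(f"{ops_per_joule / 1e9:.0f} gigaflops per watt")  # prints 50

Fifty billion operations per joule is far beyond what petascale-era systems deliver, which is why incremental scaling alone will not suffice.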
These may – or may not – be achievable at economically viable price points. The risk of brute force solutions and "research by acquisition" is that we build a Concorde SST, an economic dead end. Instead, we need to conduct the basic research and explore the new ideas in high-performance computing that might solve some of our fundamental problems. After all, studies of hypersonic scramjets continue.
Lest I be called a naysayer or predictor of doom, let me note that there are plenty of other opportunities for innovation in computing in general and in high-performance computing in particular. In addition to pursuing scramjet designs, we need the HPC equivalent of the Boeing 737 – an everyday workhorse that is reliable, efficient, easy to operate, and ubiquitous. We must address the infamous excluded middle, making HPC accessible for everyday use by businesses, researchers, and governments. There's plenty of room for innovation in algorithms, software tools, programming models, resource management, and energy-efficient design.
Remember, no exponential is forever.