It's not a Fibonacci series and it's not a prime number series. It might be the second three denominations of U.S. paper currency, but it's not. (Yes, Virginia, there is a U.S. $2 bill.) Instead, it's a measure of the potential impact and uptake (or lack thereof) of new technologies. To see why, let's move from the abstract to the concrete, considering when new technologies gain credence.
Closing the Deal
Suppose you are presented with a new hardware or software model, one that requires migrating your extant code base to exploit it. The migration effort will be substantial, but you are promised a multiplicative increase in one or more metrics – performance, reliability, scalability, maintainability, or usability. What multiplicative factor provides enough benefit to justify the cost?
Will a factor of ten? Yes; history has shown repeatedly that an order-of-magnitude change in critical metrics offers competitive advantages that offset the costs.
Will a factor of five? Perhaps it will, but there are no guarantees.
Will a factor of two? Certainly not, for the costs are real and the benefits are modest. Incremental advances of existing technology are likely to yield comparable benefits and often more quickly. Those who risk change for small factors are often left behind.
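The cost-benefit argument above can be made concrete with a toy break-even model. All the numbers below are hypothetical, chosen only to illustrate the shape of the trade-off, not taken from the article: a one-time migration cost must be recouped by the extra value a multiplicative gain delivers over the system's remaining lifetime.

```python
def payback_ratio(gain_factor: float, annual_value: float,
                  years: float, migration_cost: float) -> float:
    """Toy model: extra value delivered by a gain_factor improvement over
    `years`, divided by the one-time migration cost. A ratio above 1.0
    means the migration pays for itself; below 1.0, it does not."""
    extra_value = (gain_factor - 1.0) * annual_value * years
    return extra_value / migration_cost

# Hypothetical inputs: $1M/year of delivered value, a 3-year horizon,
# and a $5M migration effort (units are arbitrary; only ratios matter).
for factor in (2, 5, 10):
    print(f"{factor:>2}x gain: payback ratio {payback_ratio(factor, 1.0, 3, 5.0):.1f}")
```

Under these made-up numbers the pattern matches the argument: a 2x gain yields a ratio below one (a loss), 5x is a modest win with no margin for schedule slip, and 10x pays back several times over.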
Remember the Cray-1
We tend to forget the real reason the Cray-1 was so successful. It was not the innovative vector architecture and memory system, nor was it the plethora of great software. There was so little software on the Cray-1 that Los Alamos launched a project to build an operating system, DEIMOS, for it. Rather, it was that the Cray-1 was by far the fastest scalar processor in the world when it was introduced. Existing scalar code ran much faster than on earlier machines, even without vectorization, creating a powerful impetus to move to the new machine and vectorize the code incrementally.
It Will Scale Linearly
Years ago, I was sitting in the back of a conference room with John Hennessy (now the President of Stanford University), listening to a young researcher talk about their latest parallel software technology. I must confess that John and I weren't paying much attention, until the researcher said something that immediately changed our intellectual focus. In passing, the researcher said, "There are no reasons this technique shouldn't scale linearly with system size." John and I immediately looked at one another, incredulous at what we had just heard. I no longer remember which of us took exception to the comment, but we did.
Rarely do computing system architectures scale by orders of magnitude – in either direction – without exposing either performance bottlenecks or non-scalable design assumptions. Things that consume negligible time at one scale suddenly dominate performance at another, or software structures become unwieldy.
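The classic quantitative form of this observation is Amdahl's law: a cost that is negligible at one scale dominates at another. The sketch below (illustrative only, not from the article) shows how a serial fraction of just 1% caps speedup near 100x, no matter how many processors you add.

```python
def amdahl_speedup(serial_fraction: float, n: int) -> float:
    """Amdahl's law: speedup on n processors when serial_fraction
    of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

# With 1% serial work, the serial part is invisible at small scale
# but becomes the bottleneck three orders of magnitude later.
for n in (10, 100, 10_000):
    print(f"{n:>6} processors: speedup {amdahl_speedup(0.01, n):.1f}x")
```

At 10 processors the speedup is essentially ideal (about 9.2x), but at 10,000 processors it stalls just below 100x: the once-negligible 1% now consumes nearly all the time, which is precisely the kind of non-scalable design assumption exposed only by an order-of-magnitude change.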
Lessons Learned
Despite our love of self-similarity, orders of magnitude really matter. It's worth remembering this as we contemplate future exascale computing system designs and digest the recent DARPA exascale software study. Lessons from the petascale era may or may not translate directly to exascale designs. Two, five, ten – caveat emptor!