As a boy, I remember reading all the superhero comic books, from the standard Superman and Batman to the Green Lantern, Thor, The Flash and Spider-Man. Some superhero stories and backstories were better than others, but all were escapism from small town life. For me, Thor was always a favorite, though I am not quite sure why. Maybe I just liked the headgear! The summer "action movie" season is upon us, and several of these superheroes will grace the screen in new Hollywood fantasies, battling the forces of evil.
All of this made me think about the resident evil now facing us in computing – the rise of dark silicon. The very phrase sounds ominous, and it is, for I believe it will profoundly reshape how we think about computing in the next decade. But before we talk about the grand denouement where our Justice League of silicon process experts, computer architects and software gurus does battle with dark silicon, perhaps we should begin with a bit of backstory.
Moore's Law and Silicon Scaling
We are all familiar with Moore's Law, namely that the number of transistors in a given area doubles roughly every two years. It is worth noting that Gordon Moore never really claimed this was a law, merely that it was a statistical truth based on observational data and technology trends. It is also interesting how a rather informal 1965 Electronics paper, entitled "Cramming More Components onto Integrated Circuits," had such impact. Over the past thirty years, the "law" has held true due to the hard work and creativity of a great many people, as well as many billions of dollars of investment in process technology and silicon foundries.
For a detailed understanding of why reducing transistor feature sizes really has been possible, I highly recommend reading the original, seminal paper by Robert Dennard, Fritz Gaensslen, Hwa-Nien Yu, V. Leo Rideout, Ernest Bassous and Andre LeBlanc, entitled "Design of Ion-Implanted MOSFET's with Very Small Physical Dimensions." It appeared in the October 1974 issue of the IEEE Journal of Solid-State Circuits, and it defined a scaling process that has driven the semiconductor industry ever since.
Simply put, Dennard scaling says that reducing the length, width and gate oxide thickness of transistor features by a constant factor k yields transistors that are k² times smaller in area, k times faster, and that dissipate 1/k² the power – so power density stays constant even as density rises. This was Moore's Law made manifest, and it has let us move from feature sizes measured in micrometers to nanometers. Since the elucidation of Dennard scaling, all has been well in Metropolis, at least until very recently.
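The arithmetic above is simple enough to sketch in a few lines of Python. This is a toy illustration with normalized units; the choice of k = √2 (roughly one process node step) and the number of steps are my assumptions, not data from any real process:

```python
import math

def dennard_step(area, delay, power, k):
    """Apply one classical Dennard scaling step with factor k > 1."""
    return (area / k**2,   # transistor area shrinks by k^2
            delay / k,     # switching delay drops by k (k times faster)
            power / k**2)  # per-transistor power drops by k^2

# Normalized starting values for one transistor.
area, delay, power = 1.0, 1.0, 1.0
k = math.sqrt(2)  # assumed scaling factor per node step

for _ in range(2):  # two steps, i.e. an overall factor of k_total = 2
    area, delay, power = dennard_step(area, delay, power, k)

# After two steps: area 1/4, delay 1/2, power 1/4 per transistor,
# so power density (power / area) is unchanged – the heart of Dennard scaling.
print(area, delay, power, power / area)
```

Note that the last ratio is the crucial one: as long as power per transistor fell as fast as area, designers could pack in more transistors without melting the chip.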
Power and Performance: Evil Lurks
As transistor sizes continued to shrink and clock rates continued to increase, we have seen the rise of high-k dielectrics and silicon on insulator (SOI) to address static current leakage. In that spirit, one of my colleagues recently remarked that transistors are now less like solid doors that keep out the cold (current flow) unless open and more like screen doors that always have some air flow (current leakage) even when closed.
Transistor density, clock rates and chip sizes have also made heat dissipation an ever bigger problem. While at Intel, Pat Gelsinger once remarked that unless things changed, chip temperatures would approach that of the sun's surface, an obvious absurdity only possible in the comic books. All of this has led to multicore designs, bounds on clock rates for most microprocessors and decreases in operating voltages.
We have also seen new semiconductor processes and transistor design options to reduce leakage current. (Intel's recent FinFET announcement is one such example.) Although I expect to see transistor feature sizes continue to shrink, with sub-11 nanometer feature sizes achievable, it will not be easy, nor will it be cheap. Lest there be any doubt, the cheap part really matters, for it is the economics that will determine if scaling continues.
These semiconductor challenges have been mirrored by an increasingly empty bag of general purpose architectural tricks. Almost all of the techniques once found in supercomputer processor designs – superpipelining, scoreboarding, and vectorization – are now in mainstream microprocessors. It is not obvious that we can extract more parallelism while maintaining the illusion of sequential execution. In fact, the evidence is substantial that we cannot.
Battling Evil: Dark Silicon
That's the backstory. Now let's turn to the true evil lurking among us, dark silicon. If one considers process scaling trends and limits on heat dissipation, the conclusion is rather obvious. We soon will have (and in many cases already do have) chips with more transistors than can be concurrently activated. The practical implication is that most of the chip silicon will be "dark" (unpowered) most of the time.
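A back-of-the-envelope sketch makes the trend concrete. Assume, purely for illustration, that each process node doubles the transistor count, but – with voltage scaling stalled – per-transistor switching power falls by only about 1.4x rather than the classical 2x, while the chip's total power budget stays fixed. The numbers below are assumptions, not measured data:

```python
# Toy model: fraction of a chip's transistors that can switch at once,
# generation by generation, under a fixed power budget.
transistors = 1.0   # normalized transistor count
power_each = 1.0    # normalized per-transistor switching power
budget = 1.0        # fixed chip power budget

for node in range(1, 5):
    transistors *= 2.0    # Moore's Law: density still doubles per node
    power_each /= 1.4     # post-Dennard: power falls slower than density
    active_fraction = min(1.0, budget / (transistors * power_each))
    print(f"node {node}: active fraction ~ {active_fraction:.2f}")
```

Each generation, the fraction of the chip that can be lit simultaneously shrinks – after four such steps, under these assumptions, less than a quarter of the transistors can switch at once. The rest is dark.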
If it's dark most of the time, is it of any value? Yes, but only if we embrace our fears. This is the true end of the hegemony of von Neumann computing and the birth of a new era, one based on chips with a diverse set of specialized functional units, accelerators and tailored architectures that each target specific tasks. The mobile world has already learned this lesson, for functional specialization increases energy efficiency (more operations/joule) and allows designers to mix and match the set of functional units to target domains. Arguably, heterogeneous multicore is one, very tentative step in this direction for client and server computing.
This transition to diverse functional units, accelerators and tailored architectures will also challenge many of our assumptions about programming models and system software, as well as the economics of chip design. In this new world, hardware/software co-design becomes de rigueur, with devices and software systems deeply interdependent.
It is a different ecosystem, with different cultures and different processes. It means far smaller general-purpose performance gains, much more diversity in the hardware, elevation of multivariate optimization – power, performance, reliability – in programming models, new system software resource management challenges and a plethora of new applications.
More generally, with lots of silicon real estate (dark silicon), chip designers and architects have the space to place functional blocks that are used only part of the time. However, at some point (and we are arguably close to that point now), we will have deployed the functional units that cover the common tasks, and adding more may have deleterious effects due to larger chip size. That potentially drives the economics toward smaller chip sizes, which, given foundry capacity, leads to lots of small chips. That is the collision of the big chip server world with the small chip mobile and Internet of Things world, a battle already raging.
To address these challenges and battle dark silicon, we need new ideas in computer architecture, system software, programming models and end-to-end user experiences. It's an epic struggle for the future of computing. Giants are battling, and not all will survive, nor will the survivors emerge unscathed. Heroes will also emerge, sometimes from unexpected places.
I think I left my hammer, Mjölnir, around here somewhere. Do you feel your spider sense tingling?