When I first joined the computer science faculty at Illinois, Dan Slotnick was a senior faculty member. At the time, Dan was recovering from the ILLIAC IV project. I use the word "recovering" deliberately; ILLIAC IV had been a huge, complex and highly controversial project to build the world's fastest supercomputer. It is a story of great plans, some risky choices and very complex politics during the height of the Vietnam War. Student protests against U.S. Department of Defense funding for ILLIAC IV were just one of the challenges.
From the outset, ILLIAC IV was an ambitious and audacious project that pushed the boundaries of almost all aspects of computer system design. ILLIAC IV was based on a new architectural approach – large-scale parallelism – that is common today, but which was largely untried in the 1960s. It included new data-parallel programming languages (Glypnir, an Algol-like language, and IVTRAN, a parallel FORTRAN), parallel disks and thin-film (solid state) memory at a time when magnetic cores were the standard primary memory technology. The machine was also one of the first to be designed using electronic CAD.
That's me posing with parts of the ILLIAC IV at the Computer History Museum. The platter behind me is from one of the 80 MB disk drives built by Burroughs and used in the ILLIAC IV.
Experience as a Teacher
As an enterprising young faculty member, I asked Professor Slotnick if he had any words of wisdom to offer a young parallel computing researcher, based on his experience with ILLIAC IV. He simply said, "Choose your risks carefully." At about the same time, in writing about the lessons of ILLIAC IV, he noted:
By sacrificing a factor of roughly three in circuit speed, it's possible that we could have built a more reliable multi-quadrant system in less time, for no more money, and with a comparable overall performance. The same concern for the last drop of performance hurt us as well in the secondary (parallel disk) and tertiary (laser) stores.
In the intervening twenty-five years, I have come to call this Slotnick's Law, which I now phrase as "Gamble on only one thing at a time; nobody is that lucky." In less colloquial terms, it means that when building research prototypes, one should take risks on only a small number of things if one expects to learn whether the change really does make a difference and if one wants any reasonable prospect of creating a working prototype. It does not mean one should not have grand ambitions; it simply means that grand success is usually a multi-step process.
Lessons for Today
Today, several bedrock assumptions in modern computing are crumbling. Individual processor cores no longer double in speed every two years. Netbook PCs are as inexpensive as smartphones. A single, massive cloud data center or a petascale high-performance computing system contains more computers than the entire Internet did just a few years ago. Scientists and gamers now use the same computing hardware and tools. The personal petabyte will soon be but a local cache of the global, exascale knowledge store. In the midst of all this, the economy is flatter than a pancake, and investment nickels are tossed about as if they were manhole covers.
As we explore this brave new world of computing, whether the future of consumer devices or massive data centers and exascale HPC platforms, we need to choose our research prototypes wisely and well. That means choosing the key things whose exploration will maximally illuminate the future. Remember Slotnick's Law.