Perhaps the closest I have approached the feeling of intellectual omnipotence (and it has not been often) was when I was a graduate student, teaching undergraduate FORTRAN programming for science and engineering students. Those were the days of punched cards, batch processing and line printer output. It was a time when dinosaurs roamed the earth, my slide rule was still a fond memory, and we were computing on massive machines with less performance than my smartphone has today. (Ignore that part about the dinosaurs, as it was a wee bit earlier.)
During my office hours, students would troop through for help with their programming assignments, bringing program printouts. These printouts began with a "day file" showing the timeline of compilation and execution steps (JCL steps on the IBM S/360), followed by the program and then any output. After seeing a few students, one realizes that elementary programming errors fall into just a few equivalence classes, often identifiable from the day file alone. I took great pride in listening to a student's description of the problem, predicting the root cause, and then immediately locating it on the printout. The student was invariably amazed. Yes, it was a cheap parlor trick, but as a graduate student, you take gratification where you can find it.
All of this reminds me that sequential programming remains difficult, and parallel programming even more so, despite all our efforts to tame the unruly beast with syntactic sugar, specification and scripting languages, libraries, and tools. I am no expert in parallel programming language design, nor am I a cognitive psychologist, but like many of my readers, I have pulled a few all-nighters writing code. Perhaps that at least qualifies me to have an opinion.
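To make the difficulty concrete, here is a minimal sketch in C using POSIX threads; the counter, thread count, and iteration count are purely illustrative. It shows a class of bug that simply cannot arise in sequential code: two threads increment a shared counter without synchronization, and the resulting data race quietly loses updates.

    /* A minimal, illustrative data race (compile with: cc -pthread race.c). */
    #include <pthread.h>
    #include <stdio.h>

    #define INCREMENTS 1000000L

    static long counter = 0;                 /* shared, unprotected state */

    static void *worker(void *arg)
    {
        (void)arg;
        for (long i = 0; i < INCREMENTS; i++)
            counter++;                       /* read-modify-write: not atomic */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* Expected 2000000; a data race typically yields something smaller,
           and the loss varies from run to run. */
        printf("counter = %ld\n", counter);
        return 0;
    }

Run the sequential version of the same loop and the answer is always right; run this one and the answer depends on the interleaving of the two threads, which is exactly the kind of nondeterminism that makes parallel bugs so much harder to reproduce and diagnose.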
In a world of parallel programming, multicore architectures, dark silicon, and rising software complexity, we face some difficult choices. It's time to learn from the past.
Fading Abstractions
We have been fortunate to date. Although sequential programming at scale is intellectually challenging, we have been riding the wave of ever-increasing processor performance made possible by semiconductor advances. These advances, in turn, rested on an abstraction, the Mead-Conway design rules, that largely separated device physics from microprocessor architecture. Finally, the von Neumann abstraction has served us well, allowing decades of software development atop an easily understood model of sequential instruction execution.
Our run of good luck is over. Put another way, we are surrounded by opportunities! Parallelism is now the norm for almost all software developers, not just those of us in high-performance computing. More perniciously, heterogeneous parallelism is here, with disparate cores, accelerators, and functional units. As we manage dark silicon and exploit continuing increases in transistor density, this trend will only accelerate, with a dizzying array of heterogeneous chips and devices challenging code portability and performance tuning. Finally, and most worrisome, our device physics abstractions are breaking down, with device physics now directly affecting architecture. We face real challenges in silicon reliability, process variation, and energy, with near-threshold-voltage computation emerging.
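As a small illustration of what heterogeneity asks of programmers, consider the sketch below. It is not any particular vendor API; the device enumeration, kernel variants, and selection logic are hypothetical stand-ins for the real dispatch and tuning code that each new device class demands.

    /* Illustrative only: one logical kernel, one variant per device class,
       plus selection logic. Real systems add tuning parameters per variant. */
    #include <stdio.h>

    typedef enum { DEV_CPU, DEV_GPU, DEV_FPGA } device_kind;

    typedef void (*kernel_fn)(const float *in, float *out, int n);

    static void axpy_cpu(const float *in, float *out, int n)
    {
        for (int i = 0; i < n; i++) out[i] = 2.0f * in[i] + out[i];
    }

    /* In practice these would be CUDA, OpenCL, or HLS variants; here they
       are stubs that fall back to the CPU path. */
    static void axpy_gpu(const float *in, float *out, int n)  { axpy_cpu(in, out, n); }
    static void axpy_fpga(const float *in, float *out, int n) { axpy_cpu(in, out, n); }

    static kernel_fn select_kernel(device_kind d)
    {
        switch (d) {
        case DEV_GPU:  return axpy_gpu;
        case DEV_FPGA: return axpy_fpga;
        default:       return axpy_cpu;
        }
    }

    int main(void)
    {
        float in[4] = {1, 2, 3, 4}, out[4] = {0};
        kernel_fn k = select_kernel(DEV_CPU);   /* device discovered at run time */
        k(in, out, 4);
        printf("%.1f %.1f %.1f %.1f\n", out[0], out[1], out[2], out[3]);
        return 0;
    }

Multiply that pattern by every performance-critical kernel in an application, and by every new device generation, and the portability and tuning burden becomes clear.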
Holistic Collaboration
The abstractions of device physics, processor architecture and programming models naturally led to a corresponding intellectual specialization. Consequently, those working in materials science and device physics have little in common with those designing software tools. I believe that must change if we are to address our challenges.
We need new, integrated design methodologies and rich collaborations that consider end-to-end design and capability. In many ways, this is a return to the past, necessitated by the desire for continued increases in performance and tempered by the need for reliability, scalability, programmability, and economic feasibility. It will not be easy, but it is a way forward.