In the early days of the automobile, there was a lively competition among disparate technologies for hegemony as the motive power source. Steam engines were common, given their history in manufacturing and locomotives, and electric vehicles trundled through the streets of many cities. The supremacy of the internal combustion engine as the de facto power source was by no means an early certainty. Yet it triumphed due to a combination of range, reliability, cost and safety, relegating other technologies to historical curiosities.
Thus, it is ironic that we are now assiduously re-exploring several of these same alternative power sources to reduce carbon emissions and dependence on dwindling global petroleum reserves. Today's hybrid and electric vehicles embody 21st century versions of some very old ideas.
There are certain parallels to the phylogenic recapitulation of the automobile now occurring in computing. Perhaps it is time to revisit some old ideas.
Use of the word "computer" conjures certain images and brings certain assumptions. One of them, so deeply ingrained that we rarely question it, is that computing is digital and electronic. Yet there was a time not so long ago when those adjectives were neither readily assumed nor implied when discussing computing, just as the internal combustion engine was not de rigueur in automobile design.
The alternative to digital computing – analog computing – has a long and illustrious history. Its antecedents lie in every mechanical device built to solve some problem in a repeatable way, from the sundial to the astrolabe. Without doubt, analog computing found its apotheosis in the slide rule, which dominated science and engineering calculations for multiple centuries, coexisting and thriving alongside the latecomer, digital computing.
The attraction of analog computing has always been its ability to accommodate uncertainty and continuity. As Cantor showed, the real numbers are uncountably infinite, and their discretization in a floating-point representation is fraught with difficulty. Because of this, the IEEE floating-point standard is a delicate and ingenious balance between range and precision.
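That tension between range and precision is easy to see in practice. The short Python sketch below uses the IEEE 754 binary64 ("double") format, which underlies Python's `float`, to show both effects: rounding error in even trivial arithmetic, and the widening gaps between representable values at large magnitudes.

```python
# Rounding and gap effects in IEEE 754 binary64 floats.
import sys

# 0.1 and 0.2 have no exact binary representation, so rounding error
# appears in the simplest arithmetic.
print(0.1 + 0.2 == 0.3)        # False
print(f"{0.1 + 0.2:.17f}")     # 0.30000000000000004

# Machine epsilon: the gap between 1.0 and the next representable float.
print(sys.float_info.epsilon)  # 2.220446049250313e-16, i.e. 2**-52

# At large magnitudes the gaps widen: beyond 2**53, consecutive
# integers are no longer distinguishable, so adding 1.0 changes nothing.
big = 2.0 ** 53
print(big + 1.0 == big)        # True
```

The same fixed budget of 53 significand bits is what buys binary64 its enormous dynamic range; precision near any given magnitude is the price paid.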
All experimental measurements have uncertainty, and quantifying that uncertainty and its propagation in digital computing models is part of the rich history of numerical analysis. Forward error propagation models, condition numbers, and stiffness are all attributes of this uncertainty and continuity.
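To make the idea of error propagation concrete, here is a deliberately toy sketch (the particular 2×2 system and the Cramer's-rule solver are illustrative choices, not a recommended method). For a linear system Ax = b, a relative perturbation in the input b can be amplified in the solution by up to the condition number of A; the nearly singular matrix below amplifies a change in the fifth decimal place into a change in the first.

```python
# Forward error amplification in an ill-conditioned linear system.

def solve2x2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] @ (x, y) = (e, f) by Cramer's rule."""
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)

# A nearly singular matrix: det = 0.0001, condition number ~ 4e4.
A = (1.0, 1.0,
     1.0, 1.0001)

x1 = solve2x2(*A, 2.0, 2.0001)  # exact solution is (1, 1)
x2 = solve2x2(*A, 2.0, 2.0002)  # b perturbed by ~5e-5 (relative)

print(x1)  # approximately (1.0, 1.0)
print(x2)  # approximately (0.0, 2.0): a tiny input change, amplified
```

The condition number is exactly the kind of a priori uncertainty bound that numerical analysis supplies for digital computation; an analog device would simply carry the input uncertainty through continuously.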
I raise the issue of analog computing because we face some deep and substantive challenges in wringing more performance from sequential execution and the von Neumann architecture model of digital computing. Multicore architectures, limits on chip power, near-threshold voltage computation, functional heterogeneity and the rise of dark silicon are forcing us to confront fundamental design questions. Might analog computing and sub-threshold computing bring some new design flexibility and optimization opportunities?
We face an equally daunting set of challenges in scientific and technical computing at very large scale. For exascale computing, reliability, resilience, numerical stability and confidence can be problematic when input uncertainties can propagate, and single- and multiple-bit upsets can disturb numerical representations. How can we best assess the stability and error ranges of exascale computations? Could analog computing play a role?
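Why a single bit upset is so pernicious for numerical state is worth seeing directly. In the sketch below, flipping the lowest significand bit of an IEEE 754 double is nearly invisible, while flipping one high exponent bit of the very same value changes it by hundreds of orders of magnitude; which bit is struck determines whether the error is negligible or catastrophic.

```python
# The effect of a single-bit upset depends on which bit is struck.
import struct

def flip_bit(x: float, bit: int) -> float:
    """Return x with the given bit (0 = least significant) of its
    IEEE 754 binary64 encoding flipped."""
    (as_int,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", as_int ^ (1 << bit)))
    return y

x = 1.0
print(flip_bit(x, 0))   # 1.0000000000000002 - barely perceptible
print(flip_bit(x, 61))  # 2**-512, about 7.5e-155 - catastrophic
```

Bits 0-51 hold the significand and bits 52-62 the exponent, which is why the two upsets above differ so wildly in effect; detecting and bounding such disturbances across an exascale run is precisely the open question.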
Please note that I am not advocating a return to slide rules or pneumatic computing systems. Rather, I am suggesting that we step back and remember that the evolution of technologies brings new opportunities to revisit old assumptions. Hybrid computing may be one possible way to address the challenges we face on the intersecting frontiers of device physics, computer architecture and software.
A brave new world is aborning. Might there be a hybrid computer in your hybrid vehicle?