The Wizard of Oz is a beloved and widely watched movie that brought us Technicolor, several memorable songs, and lasting cultural memes – “We’re not in Kansas anymore,” “There’s no place like home,” and “Somewhere over the rainbow.” In the movie, Dorothy’s dog, Toto, pulls aside the curtain, revealing the great and mighty Wizard as a mere mortal projecting the trappings of power through indirection and abstraction. Attempting to continue the charade, Oz proclaims, “Pay no attention to that man behind the curtain!”
Deception once revealed, however well intentioned, is mere artifice. Accordingly, magicians guard the secrets of their tricks assiduously. After all, it is how they earn a living. More importantly, a trick’s mystique makes it both alluring and intriguing. The legerdemain in a coin’s disappearance may still bring a smile to one’s face, but familiarity with the French drop lessens its mystery.
In my experience, a distinguishing feature of researchers and scholars is their interest not just in the “trick,” but in how the trick is created and performed. These are the people peering around the corner, dissecting the art and engineering beneath each Disney World™ theme park ride.
Hence, some seek the truth, whereas others treasure illusion and, sometimes, their own self-deception. As Morpheus says in The Matrix, “You take the blue pill – the story ends, you wake up in your bed and believe whatever you want to believe. You take the red pill – you stay in Wonderland and I show you how deep the rabbit hole goes.”
Segue alert! Every teacher wants his or her students to take the metaphorical red pill and explore the rabbit hole. It is about challenging beliefs, perspectives, and illusions with new ideas and approaches. I have reflected on this dichotomy since my very first teaching experience, when I looked back at aspiring young students with both anticipation and trepidation.
As a teacher, there is no greater joy than seeing understanding in the eyes of students, when seeming magic is at last replaced by holistic insight. In short, it is about illumination and understanding, pulling back the curtain on apparent magic to reveal an unbroken chain of insights.
Architectural Magic
As a computer and computational scientist, I long ago realized that introductory computer architecture is the transformative place where computing prestidigitation becomes intellectual engineering. It is where varied computing insights unite in the big reveal – how an idea and the abstraction of an algorithm embodied in code become an instantiation in logic, hardware, and device physics.
Make no mistake, the rabbit hole runs unbroken from quantum mechanics through undecidability, the Church-Turing thesis and Turing machines to computability theory and formal language theory (Chomsky hierarchy). It touches the very nature of what it means to think.
First, let’s step back and remember how the intellectual puzzle pieces form the big reveal.
By the time most undergraduate students study basic computer architecture, they have often received an intellectual dose of electricity and magnetism (E&M), perhaps via the latest edition of the same Fundamentals of Physics by Halliday, Resnick (and now Walker) that I studied as an undergraduate over forty years ago. Conductance, capacitance, inductance, wave/particle duality, and quantum tunneling – it is all there. (I so treasured the joy of my E&M course that I have saved my lecture notes and worked problem sets across multiple moves and several decades.)
An introductory class in device physics, transistors, and digital logic often follows E&M. There, the rudiments of semiconductor devices and VLSI design include field effect transistors (FETs), with their gate, source, and drain; MOSFET NAND gates; and flip-flops. From there, it is just a hop, skip, and jump to circuit design and logic minimization. At this point, students have architectural Lego™ blocks and some vague notion that these are intellectually sufficient, but microprocessor architecture remains as aspirational and unattainable as a Lego Empire State Building.
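To make the Lego™ metaphor concrete, consider the functional completeness of NAND: every other gate, and hence any combinational circuit, can be composed from NAND alone. Here is a minimal sketch in Python (a teaching illustration, not a hardware description language):

```python
# A minimal sketch: NAND is functionally complete, so every other
# Boolean gate (and thus any combinational circuit) can be built from it.

def nand(a: int, b: int) -> int:
    """The primitive gate: output 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

def not_(a: int) -> int:
    return nand(a, a)              # NAND with both inputs tied together

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))        # invert NAND to recover AND

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))  # De Morgan: a OR b = NOT(NOT a AND NOT b)

def xor(a: int, b: int) -> int:
    n = nand(a, b)                 # the classic four-NAND XOR construction
    return nand(nand(a, n), nand(b, n))

# Exhaustive truth-table check over all input combinations
for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        assert xor(a, b) == (a ^ b)
```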
Meanwhile, most students have also been learning new programming language(s), efficient algorithms (quicksort), and associated data structures (balanced binary trees). Years ago, the programming languages of choice would have been C or FORTRAN. (I first learned FORTRAN in the 1970s, using WATFIV; see A Feeling for the Code). Today, most introductory computing courses teach Python, with its large ecosystem of libraries, tools, and IDEs.
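For completeness, here is a quick sketch of that quicksort in Python – the pedagogical list-comprehension version, not the in-place variant one would use in practice:

```python
# A pedagogical quicksort: partition around a pivot, then recursively
# sort the two sides. (Production code would simply call sorted().)

def quicksort(items: list) -> list:
    if len(items) <= 1:
        return items                     # base case: already sorted
    pivot = items[len(items) // 2]       # middle element as the pivot
    less    = [x for x in items if x < pivot]
    equal   = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```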
What really happens when one writes a loop or invokes a procedure? Is it just an incantation, an intellectual sacrifice offered to the machine? Therein lies the reification magic of compilers, which translate code in a high-level language such as C or FORTRAN into equivalent sequences of machine instructions. (The nuance of interpreted languages such as Python is a topic for another day.)
I have found that manually translating a simple piece of code (e.g., a loop that sums a set of numbers) into a sequence of assembly language instructions short enough to fit on a single piece of paper is by far the best pedagogical technique. Those symbolic assembly language instructions then map directly and intuitively to individual machine language instructions, each defining a specific hardware behavior (add, read, write, branch), but whose collective sequence describes the intent of the high-level language code.
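As an illustration, here is such a summing loop and one plausible hand translation into symbolic instructions for a hypothetical accumulator machine (my invention for this sketch, not any real instruction set); a minimal simulator that executes it appears a few paragraphs below.

```python
# The high-level code: sum a set of numbers.
total = 0
for x in [2, 7, 1, 8]:
    total += x

# Named memory addresses for the hypothetical accumulator machine:
PTR, SUM, CNT, ONE = 1, 2, 3, 4        # pointer, running sum, loop counter, constant 1
MEM = {PTR: 10, SUM: 0, CNT: 4, ONE: 1,
       10: 2, 11: 7, 12: 1, 13: 8}     # the data [2, 7, 1, 8] lives at addresses 10-13

# One plausible hand translation: each tuple is (operation, operand).
PROGRAM = [
    ("LOADI", 0),     #  0: acc <- 0
    ("STORE", SUM),   #  1: sum <- 0
    ("LOAD",  CNT),   #  2: loop: acc <- count
    ("BRZ",   14),    #  3: if count == 0, branch to HALT
    ("LOAD",  SUM),   #  4: acc <- sum
    ("ADDI",  PTR),   #  5: acc <- acc + mem[mem[PTR]]  (indirect add)
    ("STORE", SUM),   #  6: sum <- acc
    ("LOAD",  PTR),   #  7: acc <- pointer
    ("ADD",   ONE),   #  8: advance the pointer to the next number
    ("STORE", PTR),   #  9
    ("LOAD",  CNT),   # 10: acc <- count
    ("SUB",   ONE),   # 11: one fewer number remains
    ("STORE", CNT),   # 12
    ("BR",    2),     # 13: back to the loop test
    ("HALT",  None),  # 14
]
```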
Trust me; we are now nearing the French drop, but there is nothing up my sleeve …
Computer architecture is where code meets hardware, and the seemingly disparate Lego pieces become a dazzling whole. (It is not turtles all the way down.) Constructed using only those elementary logic gates (aka Lego blocks), the conceptual execution of machine instructions for a simple, non-pipelined stored program computer (superscalar, out-of-order, and parallel are topics for another day) can be traced easily using only pencil and paper. (Computer Organization and Design by my friends Dave Patterson and John Hennessy is as legendary in introductory computer architecture as Halliday and Resnick in physics.)
Simple, yet elegant, these designs embody the core ideas of the von Neumann architecture and its fetch-decode-execute instruction cycle. To test their designs, many a budding computer architect has also trekked to Radio Shack (RIP) or Fry's (one-stop shopping for potato and DRAM chips), or let their fingers do the walking online, for parts to breadboard simple computer components. The more ambitious and adventuresome also stuffed circuit boards and sported solder burns as a point of pride.
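To see that cycle in action, here is a minimal Python sketch of a fetch-decode-execute loop for the toy accumulator machine introduced above – an executable version of the pencil-and-paper trace, not a real microarchitecture:

```python
def run(program, mem):
    """Fetch, decode, and execute until HALT; returns final memory."""
    acc, pc = 0, 0                        # accumulator and program counter
    while True:
        op, arg = program[pc]             # fetch the next instruction
        pc += 1                           # advance the program counter
        if op == "LOADI":   acc = arg     # decode and execute
        elif op == "LOAD":  acc = mem[arg]
        elif op == "STORE": mem[arg] = acc
        elif op == "ADD":   acc += mem[arg]
        elif op == "SUB":   acc -= mem[arg]
        elif op == "ADDI":  acc += mem[mem[arg]]   # one level of indirection
        elif op == "BRZ":   pc = arg if acc == 0 else pc
        elif op == "BR":    pc = arg
        elif op == "HALT":  return mem

mem = run(PROGRAM, dict(MEM))
print(mem[SUM])   # 18, i.e., 2 + 7 + 1 + 8
```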
The great reveal, the denouement, is now at hand. From electrons through transistors and gates to architecture, and from algorithms and code to machine instructions on architecture, it all comes together. It is all there: the magic of ideas – the abstraction that is code – is now manifest in architecture, hardware, and physics. That is the “Aha!” moment, when students truly appreciate the intellectual construction, like Chartres, that both dazzles and inspires. The true wonder is that one can explain it all on a single whiteboard!
Deep and Open Questions: Not All Is Yet Revealed
The amazing construction that is modern computing rests on many scientific and engineering insights, a few of which are now feeling the strain of their age. The von Neumann model, for all of its elegance and simplicity, increasingly limits future innovation (see Nothing Lasts Forever), and the future of semiconductor technology – 7 nanometers and below – is economically and technologically uncertain. The Landauer principle and low-energy (operations/joule) or even reversible computing beckon. Quantum computing labors through an uncertain birth, brain-inspired neuromorphic computing is in its infancy, and DNA-based storage and computing are nascent.
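For a sense of scale, the Landauer principle sets the minimum energy to erase one bit at Boltzmann's constant times temperature times the natural log of two; a quick back-of-the-envelope computation at room temperature:

```python
import math

k_B = 1.380649e-23               # Boltzmann constant, joules per kelvin
T = 300.0                        # room temperature, kelvin
E_min = k_B * T * math.log(2)    # Landauer limit per erased bit
print(f"{E_min:.2e} J")          # about 2.9e-21 joules per bit
```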
Perhaps the most important intellectual question of all, one deeply related to the nature of computational complexity and the partial recursive functions, lies open: does P = NP? That is, is the class of problems solvable in non-deterministic polynomial time (NP) the same as the class of problems solvable in deterministic polynomial time (P)? Put another way, if the solution to a problem can be verified in polynomial time, can it also always be found in polynomial time? Most think not, but resolving the question has bedeviled some of the best mathematical minds of the last century. (If you think you have a proof, sketch it in the margin and claim mathematical immortality.)
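To make the asymmetry tangible, consider subset sum, a classic NP-complete problem: verifying a proposed answer is fast, while the obvious way to find one is to try all 2^n subsets. A minimal Python sketch:

```python
from itertools import combinations

def verify(numbers, target, subset_indices):
    """Polynomial-time check: does the certified subset hit the target?"""
    return sum(numbers[i] for i in subset_indices) == target

def solve(numbers, target):
    """Brute force: try all 2^n subsets -- exponential in len(numbers)."""
    n = len(numbers)
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if verify(numbers, target, subset):
                return subset
    return None

nums = [3, 34, 4, 12, 5, 2]
print(solve(nums, 9))            # (2, 4): indices of 4 and 5, since 4 + 5 == 9
print(verify(nums, 9, (2, 4)))   # True, checked in linear time
```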
Make no mistake, the rabbit hole goes deep, connecting two of the greatest intellectual edifices of the last hundred years – decidability, computability, and complexity united with quantum mechanics and the standard model – via algorithms, software, and computer architecture. The rabbit hole reaches to the very foundational question of what it means to think and share information. There is undeniable magic inside, and that magic continues to transform our perceptions and our world.
Given a choice, always follow the rabbit hole. Scrutinize the man behind the curtain. Seek the big reveal. See the whole whiteboard. Therein lies magic.