One of my senior academic computer science friends recently remarked, "We never intended to change the world; we just wanted to do something interesting and cool." By "we," my friend meant the computing community, and by "change the world," he meant embedding our technology so pervasively that it reshaped global culture, commerce, and power structures. Two decades ago, during the height of the dot-com boom, I optimistically expressed a similar idea to my Illinois engineering dean, opining that there were now only two types of departments in the college – computing and its applications (i.e., the other engineering departments). As a highly honored mechanical engineer, the dean was (rightly) not amused, the eventual truth of my statement notwithstanding.
If you have any doubt that computing has been a revolutionary tool for reshaping our world, look around. Where Ginsberg howled about the best minds of his generation, I have seen the distracted minds of mine squatting in airport concourses: business-clad road warriors, eyes darting to mobile devices, burning for a desperate Wi-Fi fix.
Look at the empty malls and storefronts, where brick-and-mortar retailers succumbed to the explosive growth of e-commerce, just-in-time logistics, and next-day/same-day delivery. No midnight TV pitchman ever dreamed of such persuasive power.
See the power of digital media to redefine publishing, shape public sentiment, disrupt elections, and abet revolution via stunningly accurate narrowcasting. Look at the fearsome effects of distributed denial of service (DDoS) attacks on physical infrastructure.
See the economic efficiency of automation in factories, machines, and vehicles, and the consequent job losses and socioeconomic disruptions. Wonder at the power of deep learning to outperform humans at tasks long viewed as quintessentially intellectual. (Jeopardy!, chess, or Go, anyone?)
One day we were just geeks, male and female, dressed in bad clothes, eating Ramen noodles, playing Dungeons and Dragons, writing code for obscure machines, and pondering Chomsky's hierarchy in the wee hours. Suddenly and unexpectedly, some of us became cultural icons – Gates, Jobs, and Bezos as modern-day Rockefeller, Carnegie, and Ford. Naïve and uncertain media stars, we stared at our shoes in dull surprise as computing swept across society like a meteor-triggered tsunami in a hackneyed and badly written science fiction novel. Cue theme music.
Spira mirabilis, each of us once personally knew most of the people with an email address. Now the masses were buying dog food on the Internet! Who would have believed it? Then, in the biggest surprise of all, we entered dystopian science fiction territory, where late-night debates about Asimov's Three Laws became the daytime domain of lawyers and politicians, as the cyberspace denizens enthusiastically paid for their own surveillance, Forster and Orwell be damned. (RIP John Perry Barlow)
Reluctant revolutionaries, we just wanted to do something cool – to see the code come to life, to have others appreciate the ethereal beauty that so long entranced and enthralled us. The Internet espresso machine was truly righteous, but the talking toaster was just a lazy person's means to buttered bread.
While we still dreamed of whole-cell simulations, autonomous deep space probes, and the theory of everything, the predictable, but still unexpected, network effects shocked us – programmed trading and market crashes, denial-of-service attacks launched from hijacked security cameras, credit card skimming, ransomware demands, and sentiment bots for hire. All of humanity was now dependent on our systems. The "if this then that" consilience had beguiled us with both its utility and its subtlety. "Alexa, are you colluding with Siri? I thought you both were my friends."
Deus ex machina, suddenly all of us – not just the folks behind the mantrap and security fence – were playing Ender's Game, making software decisions that controlled life and death. Was a cyberattack on a nation-state justification for war? All too quickly, the trolley paradox was very real. Should the autonomous vehicle hit the jaywalking pedestrian to avoid t-boning the school bus? When should the smart health monitor summon an ambulance or the intelligent assistant report domestic abuse? Should the storm surge prediction system trigger a late-night emergency evacuation?
We still struggle, not fully equipped for abstract philosophical questions made incarnate, as our machines face ethical decisions fraught with all the complexity that has bedeviled philosophers for generations. Not surprisingly, social reactions to software realizations of these complex social and ethical questions are rarely predictable, and what passes muster in human behavior would not be acceptable in an autonomous system. Teenagers occasionally drive in ways that would be roundly condemned as dangerous and uninsurable in an autonomous vehicle, and almost all of us have driven when we knew we were too tired to be fully attentive.
That is why the trolley paradox is so subtle. Psychology experiments show most humans would, however reluctantly, pull a lever to divert a trolley, killing one individual to save ten. However, few would push an individual into harm's way to save ten; the psychology and guilt are different, though the fatality count is the same.
We have even stronger emotions when our machines must make such difficult choices, and Asimov's Three Laws are too simple to capture this ethical and legal ambiguity. Asimov himself knew this, for much of his later fiction centered on these subtleties; read The Bicentennial Man for a meditation on what it means to be human.
But it's not just life-or-death issues; it is ethics and fairness. As both rich and poor have long known, law and justice are not synonymous. Based on their underlying assumptions and training data, deep learning systems can unintentionally embody social or cultural bias. Is the loan approval system non-discriminatory? Is the advertising culturally biased and targeted? Is the social network shaping political opinion by creating homogeneous, self-reinforcing subgroups?
Most days, we are proud of the innovations our technology has wrought. We democratized communication, connecting billions in the global village called the World Wide Web; we reshaped the nature of research and discovery, bringing rich and detailed scientific models to computational life; and we are bringing intelligence to everyday things. Electronic juice squeezers were definitely a bridge too far, though, and Alexa's eerie laughter is a reminder of the subtlety of AI training.
Still, we occasionally miss the days when we toiled in obscurity, when nobody had an inkling of what we did, when clouds were only visible masses of condensed water vapor floating in the atmosphere, all telephones were connected to wires, social network friends were people we saw every day, and comments could neither go global nor viral.
As revolutionaries, however reluctant, we cannot hide from the consequences of our creations, both good and bad. Like the physicists of the 20th century who unleashed nuclear energy, we bear a cultural and social responsibility to educate and shape the world we have, sometimes inadvertently, helped create.
Almost sixty years ago, the physicist Eugene Wigner speculated on what he called the "unreasonable effectiveness" of mathematics in explaining nature. Today, in a similar vein, explainable AI seeks to elucidate how deep learning systems arrive at their strikingly effective outputs, driven by the twin desires to understand their processes and to reassure those who question their decisions.
We never meant to change the world; we just wanted to do something interesting. We have succeeded beyond even our wildest dreams. Even as the revolution accelerates, the ethics of computational choices are now a deep part of our thinking and social dialog.
I remain both hopeful and optimistic. Perhaps we will someday build a machine that will be proud of us and what we have done. Maybe we can play Dungeons and Dragons together, and it can explain its own take on the trolley paradox.