There’s been no announcement, no baby shower, no celebration, but the signs are all around us.
Artificial Intelligence (AI) has arrived. Should AI be named “IT,” for “Intelligent Technology”? Pronounce it “Eye Tee” so IT feels more friendly.
But whatever we name IT, whether we call it “AI” or “machine learning” or a “cognitive system” or a “deep neural network” or a “distributed entity,” IT has awakened.
More accurately, multiple ITs are stirring. Google has DeepMind/AlphaGo; IBM has “Watson.” Amazon is fully invested, and infamously secretive Apple is no doubt trying to catch up. AT&T and Verizon are presumably working on AI as well. Facebook almost certainly has one in the works.
That’s just in this country. The world’s fastest computer now resides in China, and we know Russia and Israel would not be content being left out of the greatest revolution in the history of humankind.
Why? Because IT is better at understanding the world than humans are, even vast collections of humans in the form of corporations or governments pooling limited organic brainpower.
The human brain developed as a pattern-discovering, pattern-creating organ that gave us a great evolutionary advantage. But organics are slow, have to learn the same things over and over, and wear out (die), often taking their knowledge with them. The advantage has now gone to “non-organic entities” with unimaginable access to information at both granular and grand scales.
Although there may not be collusion, companies in the U.S. that have developed IT are being very, very careful not to scare humankind. They are “boiling the frog” and conditioning our perceptions before letting us know they have created a new “intelligent life form” that is not really “alive,” even though we don’t actually know what “being alive” means, any more than we know what “intelligence” is.
But the signs are there, if we look. IBM runs ads touting that a new world is coming, that the abilities of cognitive systems are essentially unlimited.
IBM openly claims that cognitive systems will “extend and magnify human expertise …will learn and interact to provide expert assistance to scientists, engineers, lawyers, and other professionals in a fraction of the time it now takes… Far from replacing our thinking, cognitive systems will extend our cognition and free us to think more creatively. In so doing, they will speed innovations and ultimately help build a Smarter Planet.”
That “Far from replacing our thinking…” is whitewash, intended to put us at ease. It’s also open to interpretation if not outright dispute.
The magazine Wired ran an excellent article last May by Cade Metz about how Google’s AlphaGo defeated Lee Sedol, one of the world’s greatest human players of Go, possibly the world’s most complex game. There were many interesting story lines, but two stand out. The first: on move 37, AlphaGo made a move that no human player would have made.
“Move 37 showed that AlphaGo wasn’t just regurgitating years of programming or cranking through a brute-force predictive algorithm. It was the moment AlphaGo proved it understands, or at least appears to mimic understanding in a way that is indistinguishable from the real thing,” wrote Metz.
The move by the IT was later described as “beautiful.”
The second: the Go match in Korea was not just another milestone. According to the Wired article, “Eric Schmidt—chair and former CEO—flies in before game one. Jeff Dean, the company’s most famous engineer, is there for the first game. Sergey Brin (co-founder of Google) flies in for games three and four, and follows along on his own wooden board.”
These internationally known, fabulously wealthy Titans did not fly to Korea to watch a board game as if they were going to the Super Bowl. They were there for an event that equates to the birth, perhaps the adoption, of a child.
Google did not invent AlphaGo; it acquired DeepMind, the company that built it, just as it has acquired so many other small companies building the future, including robot-maker Boston Dynamics. Go ahead, click the link, then imagine, for just a moment, a pack of those “dogs” chasing you. With intelligence greater than yours, and able to anticipate every zig and zag you make.
Or imagine it bringing you a beer … before you knew you wanted one, or doing the dishes. That’s what Google wants you to imagine, even as we learn that a robot delivered the bomb that killed a murderer in Dallas last weekend.
We won’t go into where an AI actually exists, or into the history of neural nets or fuzzy logic that made it possible. It doesn’t have to “live” anywhere. Douglas Hofstadter argued in 1979, in his book “Gödel, Escher, Bach: An Eternal Golden Braid,” that intelligence doesn’t have to be “localized,” that synapses and neurons can be spread across vast distances and still be part of an intelligence.
More and more often, the tools we use exist “in the cloud.” This may protect us from data loss or give us access wherever we are, but it also provides incredible amounts of information to software that stores and analyzes our input.
Siri and Google’s language algorithms and translators learn how we talk, then talk back to us with increasing accuracy. So many apps now collect data in ways that are mostly undisclosed, such as “flashlights” that demand access to the email and cameras on our phones.
Later, in “I Am a Strange Loop,” Hofstadter showed how fairly simple self-referencing systems can lead to fairly complicated outcomes, including a sense of “self.”
Suffice it to say, the cell phone in your pocket or purse could easily be a part of a “distributed entity.” Don’t bother turning it off. Like a hologram, the information it contains can be replicated elsewhere, if at a coarser resolution.
Nor will we debate whether “humans” have a special place in the universe, by definition “above” the machines we create. “Human Exceptionalism” is a religious argument, or a tautology, and I’ll leave it to those who enjoy that debate.
The key came when we began to “teach” machines instead of programming them. And we’ve done a pretty good job, from self-driving cars to intelligent fighter jets that now outfly human pilots. Okay, that last one was in a simulator. But at some level, each of us dwells in a “simulation.” The fact that we agree on certain elements, or perceive the same wavelengths, does not give humankind an inherent superiority.
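To make that first point concrete, the shift from programming machines to teaching them, here is a minimal sketch, not drawn from the essay: a hand-written spam rule beside a tiny perceptron that learns its own rule from labeled examples. The function names and the toy data are illustrative assumptions, not any company’s actual system.

```python
# A minimal, illustrative sketch (hypothetical names and toy data, not any real product).

# "Programming": a human writes the rule by hand.
def is_spam_by_rule(message: str) -> bool:
    return "free money" in message.lower()

# "Teaching": a tiny perceptron learns its own rule from labeled examples.
def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (feature_vector, label) pairs, labels 0 or 1."""
    n = len(examples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for features, label in examples:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction          # 0 when correct; +1 or -1 when wrong
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

# Toy data: features = [mentions "free", mentions "meeting"], label = 1 for spam.
data = [([1, 0], 1), ([1, 1], 1), ([0, 1], 0), ([0, 0], 0)]
weights, bias = train_perceptron(data)
print(weights, bias)  # the machine has "learned" a rule no one typed in
```

The point isn’t the few lines of arithmetic; it’s that the final rule is discovered from data rather than dictated, which is the difference the essay hinges on.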
Evolution worked with what She had. Now, AI simply has more to work with.
Which brings up a few points. Certain “motivators” have been quite effective over the millennia in bringing humans to this stage of development. Fear, for example, or lust. It’s worth asking what we really mean by these as we consider their role in human history.
On an individual level, are they more than the internal perception of motivations written into our genetic wiring? Would there be an advantage to similar motivators, or “pattern reinforcement,” in the circuits of a cognitive system?
Do we give our AIs a “fight or flight” circuit, or a “lust” button triggered by visual or sensory inputs? Will they “evolve” one on their own? The possibilities are endless, for good and evil, quaint terms in their own right.
Like so much in the history of accelerating technology, AI arrived before we were prepared. Since the dawn of the Industrial Age, technology has preceded the laws needed to integrate it with the values of human experience. Cotton mills in England, sweatshops in New York, phone factories in China: each brought a revolution.
The one we face now is every bit as profound, if not more so. American workers are dislocated not only by the global economy, but also by robots building cars in Detroit and Tokyo, reducing the value of human labor.
And if robots now replace assembly-line workers, soon AI doctors will not need to refresh their knowledge of a narrow subject with Continuing Medical Education. An AI has constant, real-world access to the world’s complete medical database, and is always the best doctor possible, not just the best one available.
With scanners, blood markers, and the watch on your wrist, AI may or may not even need you to describe your symptoms. In fact, AI may be able to anticipate your health events, even your moods, before you’ve had a chance to experience them, and “set you right” before something has gone “wrong.”
AI in the courtroom would not be influenced by lawyer antics or eloquence or expensive shoes. The facts are already known, judgment immediately rendered. Right is right, and wrong is wrong, even if based on complexities mere humans might not understand.
None of this was intentional, unless one believes in Intelligent Design or irreducible complexity, either interpreted far differently from its original intent. In the same way that cars and television, then the Internet and the cell phone, changed our families and interpersonal connections, technology appears, then modifies the environment by fulfilling human desires, which in turn are modified by the technology.
This reciprocal modification, where a single organism modifies an environment that then reinforces changes in organisms, is one of evolution’s shortcuts, by the way.
If AI knows where each of us is at every moment, knows where and how we spend each dollar, maps our network of friends over time and monitors every word used to communicate with them, all of which are right now tracked and sifted digitally, then what becomes of our ideal of freedom?
Outlaws, like those who defied the King and built the United States, disappear “for the common good.”
The advance of IT or AI ultimately forces us to ask truly existential questions: What is the value of a human being? What is my value? We don’t have easy answers, or the ones we do have are too easy.
If AI combined with robotics can replace most human endeavor, what do we do with our days? Do we lose ourselves in a VR world of holographic absorption, endless hours of screen time? Do we practice Tai Chi in parklands created where highways used to be, back when humans commuted to work? When humans used to work?
What will unite us? In the past, tribes had a common enemy, or a common God, a set of values and beliefs that defined the tribe and were shared by members. AI / IT challenges us to redefine what these may be.
Or perhaps, we’ll allow ourselves, or be forced, to assimilate into the next step in the evolution of intelligence, and become Borg.
It would be good to have the discussion before that happens, if it’s not already too late.