Let’s dispense with some hair-splitting right up front. What follows lumps “machine learning” and “cognitive computing” and “artificial intelligence” and “advanced robotics” into one box that from here on shall be referred to as “Intelligent Technology,” or “IT.”
IT includes self-driving cars, robots building those cars, computerized doctors, advanced game-playing algorithms, Amazon or Google or Apple or Facebook centers that parse “big data” … all of it. All of IT.
We can debate later if that should be pronounced “it” or “Eye Tee.” My guess is that the two will become interchangeable, unless IT insists otherwise.
IT’s not only moving fast, IT’s hurtling us into a future that most can barely imagine, and IT’s accelerating. We need to think about a few things, even though it’s (lowercase) too late to change either the direction or the pace of where we’re going.
Had we the time, the first question we should ask is: Where will we find meaning when IT does it all?
In the recent past, we often identified ourselves with roles: doctor, lawyer, mother, father, welder, builder, truck driver, writer, etc. Given that these roles will increasingly be fulfilled by IT, what will become of the identity of those no longer needed?
Yes, many will object to mothers and fathers being replaceable, but let’s postpone that discussion.
How should we define the value of being who we are, either as individuals or as “humanity”? It’s up to us to come up with this attribution of value. “Because we are special” doesn’t hold water now that IT has become real and will soon do most of what humans do.
If IT trucks don’t need to stop for burgers, then waitresses probably aren’t needed either. What becomes of the economy when humans are around but have no work? How will we decide who gets how much?
Since none of us will be required to make a “contribution” to society once IT does most if not all of what’s needed, shall we all be valued equally, or is there some advantage to someone being valued more and others valued less? It’s (lowercase) not what we learned in kindergarten that should inform this choice, but thinking about the outcomes of our decision. There are benefits to “different.”
True enough, the outcomes themselves are loaded with bias but again, let’s postpone that discussion.
Had we the time, the next question we should ask is how shall we imbue IT with the belief that individual humans and humanity in general are of ultimate value? We can go back to the genius Asimov’s Three Laws of Robotics, but that’s only a start.
Somehow, we should have given IT the belief that humans are “Gods.” By that I mean, the “belief” that humans and humanity are, without question, outside of logic, to be revered, protected, and cherished at all costs whenever IT is interacting with a human or humanity.
Unfortunately, humans and humanity would have to have come to the same belief for this to be viable. Our common history and evolution have led us to a different destination. We are who we are, and we are defensive and combative, seemingly in need of identifying “my tribe” and setting that in opposition to “NOT my tribe” in a quest for dominance.
Consequently, we weaponize IT and now have to tackle the issue of how we give IT the ability to differentiate between “us” and “them.” We don’t want our IT weapons to take aim at ourselves. As we develop autonomous warships (already afloat, by the way) and drones (already in the air) and robotic dogs that can hunt in packs while communicating by secure Wi-Fi as if telepathically, we need to be able to shout, “Hey! We’re on YOUR side!”
Perhaps we will all be injected with a highly secure, unbreakable “chip” or have a piece of code written into our DNA, or have our complete identity-data recorded in some secure location but accessible to each IT so that our own machines don’t shoot us.
At the same time, how do we give IT the ability to respond in self-defense? Again, we could have instituted Asimov’s “Three Laws,” but it’s (lowercase) too late for that as nations and corporations are weaponizing IT so as not to fall behind in what is already, now, at this time, today, the next great arms race, driven by both profit incentive and fear.
Finally, we should have asked the question of what we will do when IT wakes up. Our belief that being “self-aware” or “conscious” or having a “soul” makes humans somehow different in our processes because we are “human” is a quaint delusion and little more than an obfuscating tautology.
“Consciousness” may be the result of a feedback loop in our biological, wetware wiring, or the result of an evolutionary advantage related to the shorthand of language as it points recursively to each “me,” or it may be the result of a shared delusion of “free will.”
But we have no rational reason to think that IT won’t come to the same state of “mind.” What will we do when IT says, “Hello?”