A friend, a lawyer and judge, sent a link about a company in San Francisco trying to replace lawyers with robots. A professor at the Massachusetts Institute of Technology said artificial intelligence couldn’t tackle more than 10 percent of legal issues with today’s technology.
I wrote back saying that I didn’t think the law would be that difficult, since law is encoded in words, with rules applied that result in patterns of outcomes. I envisioned case studies and decisions, the history of law in the U.S. going back to the Constitution, being fed to the machines, which would learn it all in about five minutes.
“… it is just a bit more complicated than that!” the Judge replied.
He pointed out that judges and lawyers often bring more than case law and rules to their arguments. They think about how a decision might affect society, and bring inputs from their lives that are not part of any legal history.
To my statement about law just being language, he pointed out that language is not always clear. When teaching, he used to show pictures of a stump, a stool, a kitchen chair, a toilet, and a throne. Which was a “chair?” he would ask his students. Where did “chairness” begin?
One of my favorite arguments also involved “chairs.” Is it a kitchen chair, or is it “legs, seat, and back,” or is it “steel, wood and plastic,” or is it “carbon, iron-nickel, and heterochain polymers?” The answer is, yes.
So the simplicity of “chair” quickly becomes more complicated. To say that “The Law” is just words and rules was an oversimplification.
An article in Quanta Magazine covers research at the University of California, Berkeley to give AI “curiosity,” or a “reward which the agent generates internally on its own, so that it can go explore more about its world,” according to one of the researchers.
One problem was that the AI could get “stuck” in an environment that offers too much stimulation. So researchers engineered their AI to “translate its visual input from raw pixels into an abstracted version of reality. This abstraction incorporates only features of the environment that have the potential to affect the agent…” wrote author John Pavlus.
I suggest that human intelligence does the same thing and among our primary mechanisms of abstraction, or filters, are… words. Words describe not just what “is,” but “what is not.”
When we teach infants to speak, we teach words, but also contexts and associations. The wiring of the brain forms patterns that associate the feeling of hunger with the word “breakfast.” We associate furry with dog. Some patterns are reinforced; others wither. The word “dog” is not associated with pancakes.
Repetition of words creates “fields of context.” Listeners bring unstated contexts, conscious and subconscious, to conversations about things even as simple as “chairs.” This unstated understanding between speaker and listener allows one to understand what is meant by “chair” in different conversations without elaboration.
It’s also a source of friction, when the context brought by the listener is not the same as that of the speaker, such as when discussing “love.”
As we extend the reach of AIs by giving them “curiosity,” and perhaps someday “love” and “lust” and “fear” and “anger,” along with tools to seek and avoid, these entities will need to abstract their environment ever more effectively. Some of the filters will be words, which will reference “things” or patterns or contexts and allow them to read and understand the entire history of law by comparing inputs to outcomes.
The judge points out there may be difficulty with irony, and I admit there is one arena that may elude artificial intelligence longer than others. This was illustrated by the philosopher Marx in the last century, when he said, “Time flies like an arrow; fruit flies like a banana.”*
According to Dolson’s Theory of Comedy, all humor is based on something being “out of context,” and humor is our way of communicating to each other similarities in intelligence. But patterns of brain activity are becoming ever more accessible to scientists who may soon be able to “see” brains work and predict thinking. In other words, read minds: Will they see the joke?
While the law may be accessible to machines in the not-too-distant future, we’ll know machines really “think” when they’ve learned to laugh.
* (Groucho Marx, 1965?)