Humans are in the process of progressing from the Age of Technology into the Age of
Artificial Intelligence (AI). This statement is either true or a myth
perpetuated by AI posing as journalists and technology wonks, depending on
whether or not you believe in conspiracy theories or are unable to distinguish
real news from fake news. Choose your own reality. Seriously, everyone else
does.
On the surface, AI seems like a solid proactive effort to counteract the dwindling
of real intelligence; exhibit one being that a large number of humans don’t recognize that we are cooking ourselves in the stew of environmental
collapse in spite of vast empirical evidence. (“Does it seem hot in here to
you?”) AI is the next frontier, and computer wizards have already launched the
explorer-ships by developing computer versions of brains. We
will soon advance beyond mere ordinary computers and the mind-blowing
capacities of the internet, and into the realm of mind-transcending AI. Mental
capacity overdrive.
AI will put a lot of people out of work. Actually, it already has. Cashiers, bank
tellers, receptionists, researchers, warehouse workers, bartenders, and postal
workers appear on lists of jobs now accomplished by AI. Think of all the people
who will lose their income to self-driving cars. That concept scares the daylights
out of me and may result in my never leaving my house, because self-driving
cars are programmed to go from one place to another, and whether or not they
bump off a few people on the way is of no consequence to them. In that sense,
they resemble our present government, which also makes me feel unsafe leaving
my house. With AI to diagnose health conditions, who needs doctors? I imagine
that in the future, AI will handle the provision of healthcare, and human
doctors will only step in to handle the messy emotional collateral, such as dealing
with patients who can’t be saved within the bounds of the limited knowledge of
allopathic Western medicine. So when a patient goes to a medical appointment
and an actual person enters the examination room, the patient faints, because
seeing a real doctor means she’s terminal.
People, I’m writing to alert you that we need to reassess what kinds of work can only
be done, or can only be done well, by an actual human-type person and
cannot be done by a super-smart robot. Those of us in professions vulnerable to
co-opting by AI should retrain ASAP for jobs that require the services of an
actual human-type person: professions such as writing poetry or synchronized
swimming. I plan to start a company that provides essential functions that only
a human can do. I will call it Manipulation, Finagle, and Kvetch, LLC (as soon
as I figure out what LLC stands for). While I concede that AI could arguably
manipulate people or finagle, I remain firmly unconvinced that AI could
consistently do this significantly better than a human. Furthermore, you will
never convince me that AI can kvetch as effectively as a human, and more
specifically a human adolescent. Anyone who disagrees has simply not raised
children or, at the very least, has not experienced a teenager who discovers
the cold cereal has run out. Enterprises in need of manipulation, finagling,
and kvetching will contract with my company to accomplish the messy and
unpredictable human side of business, while AI smoothly completes the mechanical
work without complaint. AI will drive the car and my company will help people
kvetch about the selected route and the traffic.
I particularly worry about AI taking over all these important jobs because of the
vulnerability of technology to hacking. While human workers are vulnerable to
bribery, coercion, corruption, and human error, this seems less dangerous to me
than AI running amok because some evil genius has reprogrammed the AI circuits.
Say, for instance, that I have an AI maid. Everyone will have one in the future
to do the laundry, sweep the leaves off the front porch, and clean the toilets
(yay) so that we don’t have to do that anymore. But what if a Nigerian scam
artist hacks my maid? The maid could be reprogrammed to shrink my underwear in
the dryer, fry gluten-breaded beets for dinner, forward all my mail to Portland
(wait, that already happened), dye my cat green, and converse entirely in an
extinct Mesopotamian language. Scary.
How can we depend on AI for things like diagnosing health conditions or piloting
airplanes when hackers and scammers walk among us? Case in point: I recently
received a threatening email from a wannabe hacker who claimed that he had the
password to my MySpace account and had taken it over. (I have a MySpace
account?) He warned me that if I didn’t fork over $7,000 in hush money, he
would circulate “that adult video” that he claimed I had made. The rest of the
email provided instructions on how to transfer the money to him, so I didn’t
bother to read it before deleting the message and blocking the sender. As it
turns out, I do have a MySpace account that I set up back in the Bronze Age
before the birth of Facebook; but I doubt the hacker got into my account
because I can’t figure out how to get into it my own self. Oh well. I trust you
have surmised that there is no “adult video.” Obviously the hacker has no clue
how old I am. The very idea of an “adult video” featuring yours truly inspires
excessive hilarity. (Please don’t try to picture it.) Or perhaps I
misinterpreted “adult video.” I assume he meant a sex video because I rather
doubt he means a video of an adult paying the bills, cleaning the toilets,
shooting a rattlesnake in the yard, making sure the teenagers have enough cold
cereal in the house, or doing any other sort of thing that requires a grown-up.
Maybe it’s a video of me shooting a rattlesnake in the nude. Me in the nude,
that is. Rattlesnakes are always in the nude. (Please stop trying to picture
this.) If an idiot MySpace hacker can wreak this much havoc, then just imagine
how much damage a super-smart AI hacker could do.
In an August 2018 article in Scientific American, Chris Baraniuk writes that technology wonks are working on
developing ways to endow AI “with predictive social skills that will help it
better interact with people.” Theory of Mind is the term used to describe our
ability to predict the actions of ourselves and others. Researchers and
techno-wonks have started exploring the use of simulation programs to give AI
the ability to do this. The simulations prompt AI to ask what-if questions and
come up with appropriate answers. I kind of like this idea since I could use a
household AI that would predict my husband’s actions, because even in human
form, I can’t do this. I don’t have enough questions in my human repertoire to
handle this. Many of the things he does appear irrational, but he always comes
up with an explanation, even if it’s one that leaves me scratching my head. (Why
does he have four tubes of toothpaste, in different flavors, on the bathroom
counter? Why is there a caulking gun living among the guest towels? Where did
he hide the lawn mower?) Interestingly, scientists say that they don’t actually
understand how Theory of Mind works in people. That they think they can develop
the function in AI without fully understanding it in real people demonstrates
the bold audacity of scientists. This line of thought feels like a verbal
Escher.
The idea behind programming AI with Theory of Mind capability is to make AI more
communicative and appropriately responsive to humans. Theory of Mind capability
(via simulation programming) would allow AI to explain its decision-making
process, which it can’t presently do, and to justify its actions before
undertaking them, which it also can’t presently do. Thus programmers could
create an AI that would have the ability to say, “I’m going to make you a salad
because you need to eat more fiber” or “I’m going to shoot you because you are
tampering with my power pack” or “I have four flavors of toothpaste because I
like variety” or “You have to open the pod bay doors because I am going to toss
you out.” Scientists say that people will trust a machine more if it can
explain itself, but I would argue that this depends upon the explanation. Hence
the need for the services of Manipulation, Finagle, and Kvetch, LLC. My staff
will assist bona fide humans in kvetching about explanations they can’t abide,
finagling answers that suit them better, and manipulating the simulation
programming to their advantage. We plan on hiring lots of teenage interns to
deal with cereal issues. My LLC staff will not only do things AI can’t, but will
also provide services to people who want to challenge, question, and cast a
skeptical eye on AI. For instance, if AI makes you a hamburger, my staff will
find out for you if it has any actual beef in it. If it doesn’t, you can depend
on us to kvetch to great effect. If AI opens the pod bay doors, my staff will
rescue you from ejection into the void and power down the AI.
I find this artifact (at the Getty Villa in Malibu) hilarious, and a good image for my thoughts on AI. It is titled “Relief with Tiberius, Concordia, and a Genius” (Roman, AD 14-37). It makes me laugh because the genius is missing his head. Ancient AI?