In Artificial Intelligence We Trust?

We are technological toddlers. Over the past century, while we transformed our environment to better suit our needs, we blissfully ignored the havoc we created: industrial chemicals that brought good things into our lives now contaminate every nook and cranny on earth; we simplified our food into sugar, salt, and fat wrapped in cellophane and are now mired in an epidemic of obesity; and, having become addicted to fossil fuels, we are powering ourselves toward planetary disaster. We need some guidance here on earth. We need an entity that can solve problems too complex for our human minds. An entity that can help us make better decisions, maybe even predict the negative consequences of our actions. Computers encoded with algorithms that can learn may someday soon help us confront the short-sightedness brought about by our own hubris.

Technologically, computers may still be toddling along like us, but their computational skills have already surpassed some of the best human strategists alive. In 2015, Google’s AlphaGo outmaneuvered a top player at Go, an ancient game of strategy and intuition considered far more complex than chess. And, unlike us, Artificial Intelligence (AI) agents, especially those based on a subset of AI called Machine Learning, are poised to mature rapidly. There are now machines that can program what we cannot; and as the algorithms become ever more complex, some are being developed to explain to us how they do what they do, because it isn’t always evident. But more about that in a bit.

As with any new technology, artificial intelligence comes with a caveat: how far do we trust an inscrutable entity? Recently my sister Susan drove from Boston to Albany, NY. She’s not a Luddite, but she refused to rely on Siri. (“We all know you shouldn’t trust those directions,” she said before setting off, despite my suggestion that it was highly unlikely she’d wind up on a dead-end road with no food or water for a week in Albany, NY.*) Instead she opted to rely on her own brain and paper maps. She got lost. Eventually she found someone knowledgeable and kind enough to talk her step by step through the city to the funeral home where she was headed. “Now, there’s an app,” she said. “Someone who can talk you through directions.” If Siri were sentient, she might be offended, because that is exactly the purpose for which Siri was designed. Trust is an important issue in the world of AI. Here is a bit from an article by Steven Levy, published in Backchannel, about how Deep Learning (a form of machine learning that uses neural networks loosely modeled on those found in the human brain) has made Siri sound less robotic and more human: “Though it seems like a small detail, a more natural voice for Siri actually can trigger big differences. ‘People feel more trusting if the voice is a bit more high-quality,’ says [Dr. Tom] Gruber. ‘The better voice actually pulls the user in and has them use it more. So it has an increasing-returns effect.’” Gruber co-founded Siri, Inc. and served as its Chief Technology Officer.

Whether we realize it or not, we are engaging with these “deep neural networks” of AI even when we aren’t turning down the odd side street or crossing our fingers that we won’t end up on the wrong side of town. They engage with us whenever we venture online, shaping Google search results and Netflix movie suggestions. They even deduce we might be in need of underwear that promises not to leak (not that I’ve ever seen such ads). And AI isn’t just for consumers of movies or shoes or whatever pops up on our screens: plant pathologists are developing apps that can diagnose crop diseases in minutes anywhere on the planet (within reach of a cell signal), and someday soon there may be apps that recognize skin cancers. Deep Patient is a program aimed at diagnosis based on medical records. When it was applied to over 700,000 records, the results indicated it boosted the ability to predict disease. But there is still this issue of trust. Would you trust a computer to diagnose you?
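To make that concrete, here is a minimal sketch in Python of the general idea behind a system like Deep Patient: train a small neural network on patient records, then ask it to score new patients’ disease risk. The feature names, the synthetic data, and the tiny network below are all invented for illustration; the real Deep Patient uses a far more elaborate deep architecture trained on hundreds of thousands of actual records.

```python
# A toy sketch of disease-risk prediction from patient records.
# Everything here (features, data, network size) is invented;
# it only illustrates the train-on-records, predict-risk idea.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Fake "records": 1,000 patients x 5 numeric features
# (say: age, BMI, blood pressure, glucose, cholesterol).
X = rng.normal(size=(1000, 5))
# Fake labels: risk loosely tied to two of the features, plus noise.
y = ((0.8 * X[:, 1] + 0.6 * X[:, 3]
      + rng.normal(scale=0.5, size=1000)) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small feed-forward neural network: the simplest "deep" learner.
model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000,
                      random_state=0)
model.fit(X_train, y_train)

# Score held-out "patients" and measure predictive power.
probs = model.predict_proba(X_test)[:, 1]
print(f"AUC on held-out patients: {roc_auc_score(y_test, probs):.2f}")
```

The catch, of course, is that the trained network is just a tangle of learned weights; nothing in it volunteers a reason for any individual prediction.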

The capacity to trust in something, even a machine, is appealing, particularly these days when trust is at a premium. But are these machines trustworthy? And how do we know?

In his article “The Dark Secret at the Heart of AI,” Will Knight contemplates the pros and cons of technologies that can not only diagnose disease and send advertisements our way but also drive cars and wield deadly weapons. One of the most disconcerting problems, as Knight writes, is that how this AI technology does what it does is unknown, a black box, and for now, unknowable. Additionally, the proprietary nature of the software often does not permit scrutiny by outside experts, leaving the results, and the legal liability, in the hands of lawyers, courts, and policy makers. So when something goes wrong, there won’t be an explanation. Think about it. When there’s an accident, a structural failure, an experiment gone awry, after the initial “Oh Shit” moment, what’s our first question? How did that happen? Take disease prediction. “If something like Deep Patient is actually going to help doctors,” writes Knight, “it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed.”
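Researchers do have standard probes for coaxing at least a partial rationale out of a black box. One of the simplest is permutation importance: shuffle one input feature at a time and watch how much the model’s performance drops; a big drop means the model was leaning on that feature. This is not how Deep Patient or any particular system explains itself, just one common technique, sketched here on the same kind of toy model as above.

```python
# One standard black-box probe: permutation importance.
# Shuffling a feature the model relies on should hurt accuracy;
# shuffling an irrelevant one should not. Purely illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
# Invented feature names for the toy patient records.
feature_names = ["age", "bmi", "blood_pressure", "glucose", "cholesterol"]

X = rng.normal(size=(1000, 5))
# Risk is driven only by "bmi" and "glucose" in this fake data.
y = ((0.8 * X[:, 1] + 0.6 * X[:, 3]) > 0).astype(int)

model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000,
                      random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:15s} importance: {score:.3f}")
```

A probe like this says which inputs mattered, not why; it is a flashlight pointed at the box, not a window into it.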

One effort to break open the black box was Google’s Deep Dream, released in June 2015. The algorithm turned image recognition technology on its head: instead of recognizing images, it generated them. Like a surgical procedure, the process gave researchers clues about how the algorithm captures the “essence” of a cat or a cloud or a face, shedding light on the workings of the computational brain. This electronic dissection may be one method that allows computer engineers, like those at Google Research, to fine-tune their learning models. For example, when an unsettling melding of the inanimate and the animate occurred, a fusion of dumbbell and human arm, the engineers concluded that the neural net apparently hadn’t seen enough dumbbells simply resting on the floor; nearly every dumbbell in its training images came with an arm attached. And therein lies the problem. Knowing what goes into a program is one thing. Knowing what is missing is more difficult.

[Image: Deep Dream renderings of dumbbells, fused with human arms]

Image source: https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html
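The trick behind Deep Dream is to run learning in reverse: instead of nudging the network’s weights to fit an image, you nudge the image itself until it strongly excites the network, so the picture drifts toward whatever the net “thinks” it sees. Below is a minimal PyTorch sketch of that gradient-ascent loop, using an off-the-shelf pretrained classifier; Google’s actual implementation adds many refinements (multiple scales, jitter, smoothing) that are omitted here.

```python
# The core of Deep Dream: gradient ascent on the *image* rather than
# on the network's weights, so the picture drifts toward whatever
# excites the chosen layer. A bare-bones sketch, not Google's code.
import torch
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
layer = model.features[:20]  # activations partway through the net

# Start from random noise instead of a photograph.
image = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(100):
    optimizer.zero_grad()
    activations = layer(image)
    loss = -activations.norm()  # maximize activation = minimize negative
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0.0, 1.0)  # keep pixels in a displayable range
```

Whatever patterns emerge in the optimized image, eyes, arches, dumbbells with arms, are a direct readout of what the network learned, including what it learned wrongly.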

Going forward, as researchers continue to probe these deep learning algorithms and ask them to explain themselves, AI technology will keep maturing, and it’s likely that as the technology matures, so too will its rapport with us, its creators.

*The author has actually ended up way off track in the middle of Ohio, with no cell service and no map, because her GPS couldn’t handle a major but temporary road diversion. Fortunately she had enough food. She does recommend carrying a map.

Thank you, Shannon Bohle, who is dedicated to AI for a better world, for your review of the technical bits; this all gets very complicated very quickly, well beyond the capacity of my human brain.
