An NYU professor explains why it's so dangerous that Silicon Valley is building AI to make decisions without human values


NYU Professor Amy Webb, author of "The Big Nine: How The Tech Titans and Their Thinking Machines Could Warp Humanity." (Courtesy of Amy Webb)

  • Amy Webb is a professor of strategic foresight at NYU's Stern School of Business.
  • In this excerpt from her new book "The Big Nine: How The Tech Titans and Their Thinking Machines Could Warp Humanity," Webb explains why it's so important that artificial intelligence be built to preserve human values.
  • Without more transparency about how AI "thinks," she argues, we run the risk that algorithms will start making decisions that don't necessarily have humanity's interests at heart.

In the absence of codified humanistic values within the big tech giants, personal experiences and ideals are driving decision-making. This is particularly dangerous when it comes to AI, because students, professors, researchers, employees, and managers are making millions of decisions every day, from the seemingly insignificant (which database to use) to the profound (who gets killed if an autonomous vehicle needs to crash).

Artificial intelligence might be inspired by our human brains, but humans and AI make decisions and choices differently. Princeton professor Daniel Kahneman and Hebrew University of Jerusalem professor Amos Tversky spent years studying the human mind and how we make decisions, ultimately discovering that we have two systems of thinking: one that uses logic to analyze problems, and one that is automatic, fast, and nearly imperceptible to us. Kahneman describes this dual system in his award-winning book "Thinking, Fast and Slow." Difficult problems require your attention and, as a result, a lot of mental energy. That's why most people can't solve long arithmetic problems while walking: even the act of walking draws on that energy-hungry part of the brain. It's the other system that's in control most of the time. Our fast, intuitive mind makes thousands of decisions autonomously all day long, and while it's more energy efficient, it's riddled with cognitive biases that affect our emotions, beliefs, and opinions.

We make mistakes because of the fast side of our brain. We overeat, or drink to excess, or have unprotected sex. It's that side of the brain that enables stereotyping. Without consciously realizing it, we pass judgment on other people based on remarkably little data. Or those people are invisible to us. The fast side makes us susceptible to what I call the paradox of the present: when we automatically assume our present circumstances will not or cannot ever change, even when faced with signals pointing to something new or different. We may think that we are in complete control of our decision-making, but a part of us is continually on autopilot.

Mathematicians say that it's impossible to make a "perfect decision" because of the complexity of real-world systems and because the future is always in flux, right down to the molecular level. It would be impossible to predict every single possible outcome, and with an unknowable number of variables, there is no way to build a model that could weigh all possible answers. Decades ago, when the frontiers of AI involved beating a human player at checkers, the decision variables were straightforward. Today, asking an AI to weigh in on a medical diagnosis or to predict the next financial market crash involves data and decisions that are orders of magnitude more complex. So instead, our systems are built for optimization. Implicit in optimization is unpredictability: the freedom to make choices that deviate from our own human thinking.
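
To make "built for optimization" concrete, here is a minimal, hypothetical sketch of what such a system actually does: it never enumerates every possible outcome, it simply adjusts its internal parameters to reduce a measured error. The toy data and model below are stand-ins, not any particular company's system.

```python
import numpy as np

# Hypothetical toy problem: learn weights w so that x @ w approximates y.
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = x @ true_w + rng.normal(scale=0.1, size=100)

# The system never weighs "all possible answers"; it repeatedly nudges its
# parameters in whatever direction reduces the error it can measure.
w = np.zeros(3)
learning_rate = 0.1
for step in range(500):
    error = x @ w - y
    gradient = x.T @ error / len(y)  # direction of steepest increase in error
    w -= learning_rate * gradient    # optimize: step the other way

print("learned weights:", w)  # ends up close to true_w, found by optimization alone
```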


When DeepMind's AlphaGo Zero abandoned human strategy and invented its own last year, it wasn't deciding between preexisting alternatives; it was making a deliberate choice to try something completely different. It's the latter thinking pattern that is a goal for AI researchers, because that's what theoretically leads to great breakthroughs. So rather than being trained to make absolutely perfect decisions every time, AI systems are trained to optimize for particular outcomes. But who, and what, are we optimizing for? And how does the optimization process work in real time? That's actually not an easy question to answer.

Machine- and deep-learning technologies are more cryptic than older hand-coded systems because they bring together thousands of simulated neurons, arranged into hundreds of complicated, connected layers. After the initial input is sent to neurons in the first layer, a calculation is performed and a new signal is generated. That signal gets passed on to the next layer of neurons, and the process continues until a goal is reached. All of these interconnected layers allow AI systems to recognize and understand data at many levels of abstraction. For example, an image recognition system might detect in the first layer that an image has particular colors and shapes, while higher layers discern texture and shine. The topmost layer would determine that the food in a photograph is cilantro and not parsley.
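
A heavily simplified sketch of that layer-by-layer flow, for the curious. The layer sizes, the random (untrained) weights, and the two-class cilantro-versus-parsley output are placeholder assumptions; a real image classifier would have millions of learned parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def layer(inputs, weights, biases):
    """One layer of simulated neurons: a weighted sum followed by a nonlinearity."""
    return np.maximum(0.0, inputs @ weights + biases)  # ReLU activation

# A flattened "image" passes through successive layers, each producing a more
# abstract representation than the last.
image = rng.random(64 * 64 * 3)            # placeholder for pixel data
sizes = [64 * 64 * 3, 512, 128, 32, 2]     # final layer: e.g., cilantro vs. parsley

signal = image
for in_size, out_size in zip(sizes[:-1], sizes[1:]):
    W = rng.normal(scale=0.01, size=(in_size, out_size))  # stand-in for learned weights
    b = np.zeros(out_size)
    signal = layer(signal, W, b)           # the output of one layer feeds the next

# Turn the topmost layer's activations into class probabilities (softmax).
scores = signal - signal.max()
probabilities = np.exp(scores) / np.exp(scores).sum()
print("cilantro vs. parsley (untrained, so essentially arbitrary):", probabilities)
```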

The future of AI, and by extension the future of humanity, is controlled by just nine companies, which are developing the frameworks, chipsets, and networks, funding the majority of research, earning the lion's share of patents, and, in the process, mining our data in ways that aren't transparent or observable to us. Six are in the US, and I call them the G-MAFIA: Google, Microsoft, Amazon, Facebook, IBM, and Apple. Three are in China, and they are the BAT: Baidu, Alibaba, and Tencent.

Here's an example of how optimizing becomes a problem when the Big Nine use our data to build real-world applications for commercial and government interests. Researchers at New York's Icahn School of Medicine ran a deep-learning experiment to see if they could train a system to predict cancer. The school, based within Mount Sinai Hospital, had obtained access to the data of 700,000 patients, and the data set included hundreds of different variables. Called Deep Patient, the system used advanced techniques to spot new patterns in the data that didn't entirely make sense to the researchers but turned out to be very good at finding patients in the earliest stages of many diseases, including liver cancer. Somewhat mysteriously, it could also predict the warning signs of psychiatric disorders like schizophrenia. The researchers had built a powerful AI, one with tangible commercial and public health benefits, yet even they didn't know how it was making its decisions, and to this day they can't see its rationale. Deep Patient made clever predictions, but without any explanation, how comfortable would a medical team be in taking next steps, which could include stopping or changing medications, administering radiation or chemotherapy, or going in for surgery?
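
The excerpt doesn't describe Deep Patient's architecture, but the general pattern it sketches, learning from hundreds of patient variables to flag disease risk, looks roughly like the following. Every detail here (the synthetic records, the label, the scikit-learn model) is a hypothetical illustration, not the Mount Sinai system.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Hypothetical records: each row is a patient, each column one of hundreds of
# clinical variables (labs, diagnoses, medications) -- here just random numbers.
rng = np.random.default_rng(2)
records = rng.normal(size=(5000, 200))
# Stand-in label for "develops the disease," driven by a few of the variables.
labels = (records[:, :5].sum(axis=1) + rng.normal(size=5000) > 0).astype(int)

train_x, test_x, train_y, test_y = train_test_split(records, labels, random_state=0)

# A multi-layer network invents its own combinations of the inputs; those
# internal combinations are precisely the part no one can inspect -- the black box.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
model.fit(train_x, train_y)

print("held-out accuracy:", model.score(test_x, test_y))
# The model can assign a risk score to a new patient, but it offers no rationale.
```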

That inability to observe how AI is optimizing and making its decisions is what's known as the "black box problem." Right now, AI systems built by the Big Nine might offer open-source code, but they all function like proprietary black boxes. Their creators can describe the process in general terms, but allowing others to observe it in real time is another matter. With all those simulated neurons and layers, exactly what happened, and in which order, can't be easily reverse-engineered.

One team of Google researchers did try to develop a new technique to make AI more transparent. In essence, they ran a deep-learning image recognition algorithm in reverse to observe how the system recognized certain things, such as trees, snails, and pigs. The project, called DeepDream, used a network created by MIT's Computer Science and AI Lab. Instead of training the network to recognize objects using the layer-by-layer approach, to learn that a rose is a rose and a daffodil is a daffodil, the researchers trained it to warp images and generate objects that weren't there. Those warped images were fed through the system again and again, and each time DeepDream discovered more strange images. In essence, Google asked AI to daydream. Rather than spotting existing objects, the system was trained to do something we've all done as kids: stare up at the clouds, look for patterns in abstraction, and imagine what we see. Except that DeepDream wasn't constrained by human stress or emotion: what it saw was an acid-trippy hellscape of grotesque floating animals, colorful fractals, and buildings curved and bent into wild shapes.
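
A rough sketch of the "running it in reverse" idea: the network's weights stay fixed, and the image itself is nudged so that a chosen layer's activations get stronger, over and over. This assumes PyTorch with a recent torchvision and uses a generic, untrained VGG16 as a stand-in; it is a simplified illustration of the general technique, not Google's DeepDream code.

```python
import torch
import torchvision.models as models

# Stand-in network. For recognizable DeepDream-style patterns you would load
# pretrained weights; the mechanics below are identical either way.
net = models.vgg16(weights=None).features.eval()

# Start from a random image and treat its pixels as the thing being optimized.
image = torch.rand(1, 3, 224, 224, requires_grad=True)
target_layer = 10                     # hypothetical choice of an intermediate layer
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(50):
    optimizer.zero_grad()
    activations = image
    for i, module in enumerate(net):
        activations = module(activations)
        if i == target_layer:
            break
    # "In reverse": instead of adjusting weights to match labels, maximize the
    # layer's response by adjusting the image itself (gradient ascent on pixels).
    loss = -activations.norm()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0.0, 1.0)        # keep pixel values in a displayable range

# `image` now exaggerates whatever patterns that layer responds to most strongly:
# the machine "daydreaming" the excerpt describes.
```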

When the AI daydreamed, it invented entirely new things that made logical sense to the system but would have been unrecognizable to us, including hybrid animals, like a "Pig-Snail" and "Dog-Fish." AI daydreaming isn't necessarily a concern; however, it does highlight the vast differences between how humans derive meaning from real-world data and how our systems, left to their own devices, make sense of our data. The research team published its findings, which were celebrated by the AI community as a breakthrough in observable AI. Meanwhile, the images were so stunning and weird that they made the rounds throughout the internet. A few people used the DeepDream code to build tools allowing anyone to make their own trippy photos. Some enterprising graphic designers even used DeepDream to make strangely beautiful greeting cards and put them up for sale on Zazzle.com.

The AI-powered "DeepDream" turns any photo or image into a hallucinatory masterpiece. (Sean Gallup/Getty Images)


DeepDream offered a window into how certain algorithms process information; however, it can't be applied across all AI systems. How newer AI systems work, and why they make certain decisions, is still a mystery. Many within the AI tribe will argue that there is no black box problem, though to date these systems remain opaque. They argue instead that making the systems transparent would mean disclosing proprietary algorithms and processes. This makes sense, and we should not expect a public company to make its intellectual property and trade secrets freely available to anyone, especially given the aggressive position China has taken on AI.

However, in the absence of meaningful explanations, what proof do we have that bias hasn't crept in? Without knowing the answer to that question, how would anyone possibly feel comfortable trusting AI?

We aren't demanding transparency for AI. We marvel at machines that seem to mimic humans but don't quite get it right. We laugh about them on late-night talk shows, as we are reminded of our ultimate superiority. Again, I ask you: What if these deviations from human thinking are the start of something new?

Here's what we do know. Commercial AI applications are designed for optimization, not interrogation or transparency. DeepDream was built to address the black box problem, to help researchers understand how complicated AI systems are making their decisions. It should have served as an early warning that AI's version of perception is nothing like our own. Yet we're proceeding as though AI will always behave the way its creators intended.

The AI applications built by the Big Nine are now entering the mainstream, and they're meant to be user-friendly, enabling us to work faster and more efficiently. End users, whether police departments, government agencies, or small and medium businesses, just want a dashboard that spits out answers and a tool that automates repetitive cognitive or administrative tasks. We all just want computers that will solve our problems, and we want to do less work. We also want less culpability: if something goes wrong, we can simply blame the computer system. This is the optimization effect, and its unintended outcomes are already affecting everyday people around the world. Again, this should raise a sobering question: How are humanity's billions of nuanced differences in culture, politics, religion, sexuality, and morality being optimized? In the absence of codified humanistic values, what happens when AI is optimized for someone who isn't anything like you?


Excerpted from: The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity by Amy Webb. Copyright © by Amy Webb. Published by arrangement with PublicAffairs, an imprint of Hachette Book Group.
