It’s hard to miss the drumbeat: Self-driving cars are coming. Self-driving cars are coming.

You may have first heard about self-driving cars when Google put a clunky, driverless vehicle on the road to test the new technology. These days practically every major auto company seems to be getting in on the act, whether through driver-assistance features or fully autonomous vehicles.

This has led to plenty of questions about the possible impacts, good and ill, of this dramatic new technology. Could they be hacked, putting passengers at risk? Will taxi, bus, and truck drivers soon be out of a job? How will these vehicles be regulated? Who is liable when crashes occur? And what will the fast-moving, self-driving car do when it must choose between colliding with an oncoming vehicle or turning and hitting pedestrians?

You probably have not been asked what you think about these or other such questions, and that is a shame. This rapidly arriving new technology has the potential to transform our world, reducing road fatalities and changing notions of what it means to be free.

And it’s not just self-driving cars. As the convergence between new technologies accelerates in what some are calling the Fourth Industrial Revolution, it’s worth reflecting on how essential it is for scientists, engineers, and technologists to ask questions about the potential societal impact of their inventions. You know, before it’s too late.

The challenge is to introduce what I call “actionable empathy” into the development of new technologies.

Empathy is the ability to experience someone else’s feelings—their fears, hopes, and aspirations—from that person’s perspective. Most of us know what it’s like to identify with someone else who’s experiencing strong emotions or facing extreme circumstances—to empathize with them. It’s an essential binding force within society that enables us to identify and respond to another person’s condition, without experiencing directly what they are going through.

Empathy is not a term we often use when it comes to innovation. Yet at its core, empathy is the ability to create shared understanding where there is a lack of shared knowledge, and this is where it becomes a powerful tool in technology development. An empathetic mindset may be a critical determinant of whether this new wave of convergence yields a better world or one fraught with grave dangers spurred by a failure to consider unintended consequences.

Consider the growing sophistication of gene-editing techniques that enable us to alter human embryos, and of cloud-based artificial intelligence that can listen in on your conversations and act on what it thinks you need before you know yourself. Or think about brain implants that can help manage previously incurable neurological diseases, but can also alter moods at the flick of a switch.

These are complex, potentially world-altering new possibilities. Do the principal players involved in their development possess the right skills, incentives, and tools to ask more than what problems the technology will solve and whether it will work? Can they also identify the risks and address how the technology might add physical, emotional, cultural, even spiritual value to individuals and society?

No single person, nor even a group of experts, can hope to fully understand from their own perspective how technology innovations will impact diverse individuals, communities, and cultures. Nor can we expect experts working on launching a new technology to be its chief critics or reviewers. These experts need others' empathy to reflect on potential futures from different perspectives. The problem is that there is no universal common language for understanding the societal risks and benefits of these innovations, and so non-experts are left out of the development equation, even though they may be profoundly impacted by the new technologies and may have critical insights into avoiding pitfalls.

This is where empathy that supports informed decisions—actionable empathy—becomes a powerful mechanism for ensuring all of us have a say in how emerging technologies progress.

To understand how this might play out, consider the case of human gene editing. The scientific community is rapidly reaching a point where inheritable traits can be engineered into human embryos with relative ease. This technology is so controversial that the scientific community is already discussing where the ethical boundaries lie. But, as yet, there is relatively little societal engagement around the technology.

Imagine a scenario in which researchers engage at an early stage with individuals who have strong views for and against human gene editing. Now imagine if the researchers develop a way to accurately describe the hopes and concerns of these individuals, including what they think the consequences of the technology might be. They would develop the technology with knowledge of its real-world impact, and might be able to incorporate what they learn into how they create and describe the technology.

In many ways, actionable empathy is a facet of enlightened self-interest. Increasingly in today’s interconnected world, products cannot succeed long-term if they fall afoul of vocal communities. This was starkly seen with genetically modified foods, where blinkered commercial self-interest failed to take into account how the technology—and the perceived motivation behind it—would be received by different communities.

Without such actionable empathy, technology innovation becomes driven by individual or corporate aims, while societal good remains a side effect rather than a planned outcome.

For thousands of years, human history has been dominated by emerging technologies that create the problems that the next wave of innovation helps resolve. So far, we have avoided widespread catastrophe. But as our world becomes more interconnected and our technologies more integrated and powerful, we are in danger of losing the ability to handle the challenges that self-serving technology innovation creates.

This brings us back to self-driving cars. Imagine a future where the tedium and risks of driving are all but eliminated. Now imagine that, in this future, people who cannot drive, or are uneasy driving, or cannot afford to drive, have access to self-driving cars. And imagine that these cars empower their “drivers” in more ways than conventional cars ever could—just as smartphones have empowered us beyond the wildest dreams of previous generations.

This is a future that’s within our grasp. But only if researchers, designers, manufacturers, and others understand deeply what car ownership means to people. And such change presents challenges. To many of us, the cars we drive are a symbol of freedom, and an outward expression of who we are. They enable us to go where we want, when we want, with us quite literally “behind the wheel.” I suspect that, as the possibility of widespread—and eventually obligatory—automation comes closer, some will see the technology as threatening their liberty and their sense of identity. Those developing the technology will need to understand this perceived threat, and that requires engaging with all kinds of people, empathetically.

Technology innovation can produce a self-driving car that works. But only empathy will lead to one that people actually want.

Andrew Maynard is a professor in the School for the Future of Innovation in Society at Arizona State University, as well as Director of ASU’s Risk Innovation Lab.
