By Sheila Jasanoff, Professor of Science and Technology Studies at Harvard Kennedy School. This blog post is based on her presentation on the ethics of invention at an event of the Rathenau Instituut.
Reading time: 3-4 minutes
Be sure to read the other articles in the Decent Digitisation series.
Disruptive innovations are innovations that cause upheaval in society. They change our lives radically. For many people, disruptive innovations come as a surprise. That’s odd, because science fiction films show that we are very good at imagining what happens when new technologies take over the world.
In the 1956 film Forbidden Planet, for example, human explorers on a distant planet have evidently been wiped out by a malign, superior intelligence; only two people and an enormous technological complex survive. In 2001: A Space Odyssey, released in 1968, one of the main characters is a clever supercomputer that can read lips – something that has become possible in the meantime. These films show us that humans have the imagination to think about the social implications of new technology. And that we’re also capable of preparing ourselves for disruptive innovations.
Silicon Valley’s ‘every user for himself’ ethos
The smartphone is an example of a disruptive innovation. From our social lives to our consumption habits, and from data security to mobility, smartphones are changing our behavior in unprecedented ways. The consequences are both positive and negative.
What strikes me about the digital technology in smartphones and elsewhere is that it promotes the ‘every user for himself’ ethos of Silicon Valley. Take the payment app Venmo, which is very popular in the United States. When you dine out with friends, it will tell you precisely what each person owes. No one pays for a round of drinks anymore, or lends money trusting that good friends will always pay each other back. In other words, although it makes such transactions easier, it may also undermine long-cherished values like generosity and taking care of one another in everyday life.
How to protect our values
Fortunately, we can tame new technology in all sorts of ways: the government can enact legislation, consumer preferences can regulate the market, and ethics can steer trends away from effects many see as harmful. But all these have their limitations.
First of all, two governance paradigms influence how we think about technology: the paradigm of risks and the paradigm of rights.
Why regulating technology is not enough
If we focus solely on risks, then we try to reduce the damage that technology can cause as much as possible. We dive into statistics and study complex expert reports. It’s a valuable strategy, but limited: in risk analyses, we almost always accept new technology without questioning why we need it in the first place. And there is limited public participation, because we let experts do much of the analysing and decision-making for us.
If we focus on rights, we ask how well technology actually reflects the rights and freedoms that we have enshrined in laws. This legal perspective usefully complements risk analysis. In the United States, for example, the courts moved at critical points to protect privacy in telephone booths and on cellphones, on the grounds that they carry the same expectations of privacy that people once attached to their homes and their thoughts.
Legal standards are not enough, however. Government regulation is often slow-moving and tends to follow market trends instead of leading them. And since regulation follows the initial design phase of technological systems, it often takes the form of damage control rather than social shaping.
The market is more immediately responsive to consumer preferences. However, the market too is an inadequate regulatory mechanism because of lock-ins already in place. It turns out, for example, that the new economy for sectors such as biotechnology isn’t actually creating a competitive and innovative market; instead, it is dominated by a few giants who bend innovation to protect their existing market share. In addition, the environment too often loses out to the demand for short-term profits.
Even ethics is not enough
This brings us to ethics. Can greater ethical expertise convince governments to spur businesses to act responsibly, especially toward excluded or marginalized groups? Unfortunately, even ethics is just one piece of the puzzle. All too often, ethics committees emphasise individualistic values, such as bodily integrity, over collective values such as equality. Ultimately, moreover, the point is to encourage ethical reflection in every person instead of outsourcing it to groups of experts selected through opaque, possibly undemocratic processes.
So how should we deal with new technology?
We shouldn’t be discouraged by all these critical remarks. Risk analyses, regulations, market instruments, and ethics all give us useful ways to shape the introduction of a new technology, but no mechanism is enough all by itself. For every new technology, we must leave ourselves time to stop and consider how to engage a wider range of social perspectives. And we should try to answer the following four questions:
- Is there another way to evaluate the need that this technology is addressing?
- Who is most likely to be hurt by this technology?
- Who will win and who will lose with the adoption of this technology?
- How can we learn and improve our understanding of this technology?
These are the technologies of humility that will help repair and strengthen social relationships against disruptions caused by new and emerging technologies. If we keep the negative distributive impacts under control, and consider lower-impact alternatives as needed, then we can use technology not to harm or destroy the world but to make it a truly better place.
Sheila Jasanoff is Professor of Science and Technology Studies (STS) at Harvard Kennedy School. Her book The Ethics of Invention – Technology and the Human Future was published earlier this year.