
It’s hard to be a moral person. Technology is making it harder.


August 4, 2021

Digital distractions such as social media and smartphones wreak havoc on our attention spans. Could they also be making us less ethical?

It was on the day I read a Facebook post by my sick friend that I started to really question my relationship with technology.

An old friend had posted a status update saying he needed to rush to the hospital because he was having a health crisis. I half-choked on my tea and stared at my laptop. I recognized the post as a plea for support. I felt fear for him, and then … I did nothing about it, because I saw in another tab that I’d just gotten a new email and went to check that instead.

After a few minutes scrolling my Gmail, I realized something was messed up. The new email was obviously not as urgent as the sick friend, and yet I’d acted as if they had equal claims on my attention. What was wrong with me? Was I a terrible person? I dashed off a message to my friend, but continued to feel disturbed.

Gradually, though, I came to think this was less an indication that I was an immoral individual and more a reflection of a bigger societal problem. I began to notice that digital technology often seems to make it harder for us to respond in the right way when someone is suffering and needs our help.




Think of all the times a friend has called you to talk through something sad or stressful, and you could barely stop your twitchy fingers from checking your email or scrolling through Instagram as they talked. Think of all the times you’ve seen an article in your Facebook News Feed about anguished people desperate for help — starving children in Yemen, dying Covid-19 patients in India — only to get distracted by a funny meme that appears right above it.

Think of the countless stories of camera phones short-circuiting human decency. Many a bystander has witnessed a car accident or a fist-fight and taken out their phone to film the drama rather than rushing over to see if the victim needs help. One Canadian government-commissioned report found that when our experience of the world is mediated by smartphones, we often fixate on capturing a “spectacle” because we want the “rush” we’ll get from the instant reaction to our videos on social media.

Multiple studies have suggested that digital technology is shortening our attention spans and making us more distracted. What if it’s also making us less empathetic, less prone to ethical action? What if it’s degrading our capacity for moral attention — the capacity to notice the morally salient features of a given situation so that we can respond appropriately?

There is a lot of evidence to indicate that our devices really are having this negative effect. Tech companies continue to bake in design elements that amplify the effect — elements that make it harder for us to sustain uninterrupted attention to the things that really matter, or even to notice them in the first place. And they do this even though it’s becoming increasingly clear that this is bad not only for our individual interpersonal relationships, but also for our politics. There’s a reason why former President Barack Obama now says that the internet and social media have created “the single biggest threat to our democracy.”

The idea of moral attention goes back at least as far as ancient Greece, where the Stoics wrote about the practice of attention (prosoché) as the cornerstone of a good spiritual life. In modern Western thought, though, ethicists didn’t focus too much on attention until a band of female philosophers came along, starting with Simone Weil.

Weil, an early 20th-century French philosopher and Christian mystic, wrote that “attention is the rarest and purest form of generosity.” She believed that to be able to properly pay attention to someone else — to become fully receptive to their situation in all its complexity — you need to first get your own self out of the way. She called this process “decreation,” and explained: “Attention consists of suspending our thought, leaving it detached, empty ... ready to receive in its naked truth the object that is to penetrate it.”

Weil argued that plain old attention — the kind you use when reading novels, say, or birdwatching — is a precondition for moral attention, which is a precondition for empathy, which is a precondition for ethical action.

Later philosophers, like Iris Murdoch and Martha Nussbaum, picked up and developed Weil’s ideas. They garbed them in the language of Western philosophy; Murdoch, for example, appeals to Plato as she writes about the need for “unselfing.” But this central idea of “unselfing” or “decreation” is perhaps most reminiscent of Eastern traditions like Buddhism, which has long emphasized the importance of relinquishing our ego and training our attention so we can perceive and respond to others’ needs. It offers tools like mindfulness meditation for doing just that.

The idea that you should practice emptying out your self to become receptive to someone else is antithetical to today’s digital technology, says Beverley McGuire, a historian of religion at the University of North Carolina Wilmington who researches moral attention.

“Decreating the self — that’s the opposite of social media,” she says, adding that Facebook, Instagram, and other platforms are all about identity construction. Users build up an aspirational version of themselves, forever adding more words, images, and videos, thickening the self into a “brand.”

What’s more, over the past decade a bevy of psychologists have conducted studies exploring how (and how often) people use social media and how it affects their psychological health. They’ve found that social media encourages users to compare themselves to others. This social comparison is baked into the platforms’ design. Because Facebook’s algorithms bump up posts in our News Feed that have gotten plenty of “Likes” and congratulatory comments, we end up seeing a highlight reel of our friends’ lives. They seem to be always succeeding; we feel like failures by contrast. We typically then either spend more time scrolling on Facebook in the hope that we’ll find someone worse off so we feel better, or we post our own status update emphasizing how great our lives are going. Both responses perpetuate the vicious cycle.

In other words, rather than helping us get our own selves out of the way so we can truly attend to others, these platforms encourage us to create thicker selves and to shore them up — defensively, competitively — against other selves we perceive as better off.

[Illustration: a collection of mousetraps capturing different social media logos. Efi Chalikopoulou for Vox]

And what about email? What was really happening the day I got distracted from my sick friend’s Facebook post and went to look at my Gmail instead? I asked Tristan Harris, a former design ethicist at Google. He now leads the Center for Humane Technology, which aims to realign tech with humanity’s best interests, and he was part of the popular Netflix documentary The Social Dilemma.

“We’ve all been there,” he assures me. “I worked on Gmail myself, and I know how the tab changes the number in parentheses. When you see the number [go up], it’s tapping into novelty seeking — same as a slot machine. It’s making you aware of a gap in your knowledge and now you want to close it. It’s a curiosity gap.”


Plus, human beings naturally avert their attention from uncomfortable or painful stimuli like a health crisis, Harris adds. And now, with notifications coming at us from all sides, “It’s never been easier to have an excuse to attenuate or leave an uncomfortable stimulus.”

By fragmenting my attention and dangling before it the possibility of something newer and happier, Gmail’s design had exploited my innate psychological vulnerabilities and had made me more likely to turn away from my sick friend’s post, degrading my moral attention.

The problem isn’t just Gmail. Silicon Valley designers have studied a whole suite of “persuasive technology” tricks and used them in everything from Amazon’s one-click shopping to Facebook’s News Feed to YouTube’s video recommender algorithm. Sometimes the goal of persuasive technology is to get us to spend money, as with Amazon. But often it’s just to keep us looking and scrolling and clicking on a platform for as long as possible. That’s because the platform makes its money not by selling something to us, but by selling us — that is, our attention — to advertisers.

Think of how Snapchat rewards you with badges when you’re on the app more, how Instagram sends you notifications to come check out the latest image, how Twitter purposely makes you wait a few seconds to see notifications, or how Facebook’s infinite scroll feature invites you to engage in just one ... more ... scroll.



A lot of these tricks can be traced back to BJ Fogg, a social scientist who in 1998 founded the Stanford Persuasive Technology Lab to teach budding entrepreneurs how to modify human behavior through tech. A lot of designers who went on to hold leadership positions at companies like Facebook, Instagram, and Google (including Harris) passed through Fogg’s famous classes. More recently, technologists have codified these lessons in books like Hooked by Nir Eyal, which offers instructions on how to make a product addictive.

The result of all this is what Harris calls “human downgrading”: A decade of evidence now suggests that digital tech is eroding our attention, which is eroding our moral attention, which is eroding our empathy.

In 2010, psychologists at the University of Michigan analyzed the findings of 72 studies of American college students’ empathy levels conducted over three decades. They discovered something startling: There had been a more than 40 percent drop in empathy among students. Most of that decline happened after 2000 — the decade that Facebook, Twitter, and YouTube took off — leading to the hypothesis that digital tech was largely to blame.

In 2014, a team of psychologists in California authored a study exploring technology’s impact from a different direction: They studied kids at a device-free outdoor camp. After five days without their phones, the kids were significantly better at reading people’s facial expressions and emotions than a control group. Talking to one another face to face, it seemed, had enhanced their attentional and emotional capacities.

In a 2015 Pew Research Center survey, 89 percent of American respondents admitted that they whipped out their phone during their last social interaction. What’s more, 82 percent said that doing so degraded the conversation and decreased the empathic connection they felt toward the other people they were with.

But what’s even more disconcerting is that our devices disconnect us even when we’re not using them. As the MIT sociologist Sherry Turkle, who researches technology’s adverse effects on social behavior, has noted: “Studies of conversation, both in the laboratory and in natural settings, show that when two people are talking, the mere presence of a phone on a table between them or in the periphery of their vision changes both what they talk about and the degree of connection they feel. People keep the conversation on topics where they won’t mind being interrupted. They don’t feel as invested in each other.”

We’re living in Simone Weil’s nightmare.

Digital tech doesn’t only erode our attention. It also divides and redirects our attention into separate information ecosystems, so that the news you see is different from, say, the news your grandmother sees. And that has profound effects on what each of us ends up viewing as morally salient.

To make this concrete, think about the recent US election. As former President Donald Trump racked up millions of votes, many liberals wondered incredulously how nearly half of the electorate could possibly vote for a man who had put kids in cages, enabled a pandemic that had killed many thousands of Americans, and so much more. How was all this not a dealbreaker?

“You look over at the other side and you say, ‘Oh, my god, how can they be so stupid? Aren’t they seeing the same information I’m seeing?’” Harris said. “And the answer is, they’re not.”

Trump voters saw a very different version of reality than others over the past four years. Their Facebook, Twitter, YouTube, and other accounts fed them countless stories about how the Democrats are “crooked,” “crazy,” or straight-up “Satanic” (see under: QAnon). These platforms helped ensure that a user who clicked on one such story would be led down a rabbit hole where they’d be met by more and more similar stories.

Say you could choose between two types of Facebook feeds: one that constantly gives you a more complex and more challenging view of reality, and one that constantly gives you more reasons why you’re right and the other side is wrong. Which would you prefer?

Most people would prefer the second feed (which technologists call an “affirmation feed”), making that option more successful for the company’s business model than the first (the “confronting feed”), Harris explained. Social media companies give users more of what they’ve already indicated they like, so as to keep their attention for longer. The longer they can keep users’ eyes glued to the platform, the more they get paid by their advertisers. That means the companies profit by putting each of us into our own ideological bubble.

Think about how this plays out when a platform has 2.7 billion users, as Facebook does. The business model shifts our collective attention onto certain stories to the exclusion of others. As a result, we become increasingly convinced that we’re good and the other side is evil. We become less able to empathize with what the other side might have experienced.

In other words, by narrowing our attention, the business model also ends up narrowing our moral attention — our ability to see that there may be other perspectives that matter morally.

The consequences can be catastrophic.

Myanmar offers a tragic example. A few years ago, Facebook users there used the platform to incite violence against the Rohingya, a mostly Muslim minority group in the Buddhist-majority country. The memes, messages, and “news” that Facebook allowed to be posted and shared on its platform vilified the Rohingya, casting them as illegal immigrants who harmed local Buddhists. Thanks to the Facebook algorithm, these emotion-arousing posts were shared countless times, directing users’ attention to an ever narrower and darker view of the Rohingya. The platform, by its own admission, did not do enough to redirect users’ attention to sources that would call this view into question. Empathy dwindled; hate grew.

In 2017, thousands of Rohingya were killed, hundreds of villages were burned to the ground, and hundreds of thousands were forced to flee. It was, the United Nations said, “a textbook example of ethnic cleansing.”

Myanmar’s democracy was long known to be fragile, while the United States has been considered a democracy par excellence. But Obama wasn’t exaggerating when he said that democracy itself is at stake, including on American soil. The past few years have seen mounting concern over the way social media gives authoritarian politicians a leg up: By offering them a vast platform where they can demonize a minority group or other “threat,” social media enables them to fuel a population’s negative emotions — like anger and fear — so it will rally to them for protection.

“Negative emotions last longer, are stickier, and spread faster,” explained Harris. “So that’s why the negative tends to outcompete the positive” — unless social media companies take concerted action to stop the spread of hate speech or misinformation. But even in the consequential 2020 US election, which they had ample time to prepare for, their actions were too little, too late, analysts noted. The way that attention, and by extension moral attention, was shaped online ended up breeding a tragic moral outcome offline: Five people died in the Capitol riot.

[Illustration: a woman stares into a large eyeball as the reflection of another man looks back at her from inside the eye. Efi Chalikopoulou for Vox]

People who point out the dangers of digital tech are often met with a couple of common critiques. The first one goes like this: It’s not the tech companies’ fault. It’s users’ responsibility to manage their own intake. We need to stop being so paternalistic!

This would be a fair critique if there were symmetrical power between users and tech companies. But as the documentary The Social Dilemma illustrates, the companies understand us better than we understand them — or ourselves. They’ve got supercomputers testing precisely which colors, sounds, and other design elements are best at exploiting our psychological weaknesses (many of which we’re not even conscious of) in the name of holding our attention. Compared to their artificial intelligence, we’re all children, Harris says in the documentary. And children need protection.

Another critique suggests: Technology may have caused some problems — but it can also fix them. Why don’t we build tech that enhances moral attention?

“Thus far, much of the intervention in the digital sphere to enhance that has not worked out so well,” says Tenzin Priyadarshi, the director of the Dalai Lama Center for Ethics and Transformative Values at MIT.

It’s not for lack of trying. Priyadarshi and designers affiliated with the center have tried creating an app, 20 Day Stranger, that gives continuous updates on what another person is doing and feeling. You get to know where they are, but never find out who they are. The idea is that this anonymous yet intimate connection might make you more curious or empathetic toward the strangers you pass every day.

They also designed an app called Mitra. Inspired by Buddhist notions of a “virtuous friend” (kalyāṇa-mitra), it prompts you to identify your core values and track how much you acted in line with them each day. The goal is to heighten your self-awareness, transforming your mind into “a better friend and ally.”

I tried out this app, choosing family, kindness, and creativity as the three values I wanted to track. For a few days, it worked great. Being primed with a reminder that I value family gave me the extra nudge I needed to call my grandmother more often. But despite my initial excitement, I soon forgot all about the app. It didn’t send me push notifications reminding me to log in each day. It didn’t congratulate me when I achieved a streak of several consecutive days. It didn’t “gamify” my successes by rewarding me with points, badges, stickers, or animal gifs — standard fare in behavior modification apps these days.

I hated to admit that the absence of these tricks led me to abandon the app. But when I confessed this to McGuire, the University of North Carolina Wilmington professor, she told me her students reacted the same way. In 2019, she conducted a formal study on students who were asked to use Mitra. She found that although the app increased their moral attention to some extent, none of them said they’d continue using it beyond the study.

“They’ve become so accustomed to apps manipulating their attention and enticing them in certain ways that when they use apps that are intentionally designed not to do that, they find them boring,” McGuire said.

Priyadarshi told me he now believes that the “lack of addictive features” is part of why new social networks meant as more ethical alternatives to Facebook and Twitter — like Ello, Diaspora, or App.net — never manage to peel very many people off the big platforms.

So he’s working to design tech that enhances people’s moral attention on the platforms where they already spend time. Inspired by pop-up ads on browsers, he wants users to be able to integrate a plug-in that periodically peppers their feeds with good behavioral nudges, like, “Have you said a kind word to a colleague today?” or, “Did you call someone who’s elderly or sick?”

Sounds nice, but implicit in this is a surrender to a depressing fact: Companies such as Facebook have found a winning strategy for monopolizing our attention. Technologists can’t convert people away unless they’re willing to use the same harmful tricks as Facebook, which some thinkers feel defeats the purpose.

That brings up a fundamental question. Since hooking our attention manipulatively is part of what makes Facebook so successful, if we’re asking it to hook our attention less, does that require it to give up some of its profit?

“Yes, they very much would have to,” Harris said. “This is where it gets uncomfortable, because we realize that our whole economy is entangled with this. More time on these platforms equals more money, so if the healthy thing for society was less use of Facebook and a very different kind of Facebook, that’s not in line with the business model and they’re not going to be for it.”

Indeed, they are not for it. Facebook ran experiments in 2020 to see if posts deemed “bad for the world” — like political misinformation — could be demoted in the News Feed. They could, but at a cost: The number of times people opened Facebook decreased. The company abandoned the approach.

So, what can we do? We have two main options: regulation and self-regulation. We need both.

On a societal level, we have to start by recognizing that Big Tech is probably not going to change unless the law forces it to, or it becomes too costly (financially or reputationally) not to change.

So one thing we can do as citizens is demand tech reform, putting public pressure on tech leaders and calling them out if they fail to respond. Meanwhile, tech policy experts can push for new regulations. These regulations will have to change Big Tech’s incentives by punishing unwanted behavior — for example, by forcing platforms to pay for the harms they inflict on society — and rewarding humane behavior. Changed incentives would increase the chances that if up-and-coming technologists design non-manipulative tech, and investors move funding toward them, their better technologies can actually take off in the marketplace.

Regulatory changes are already in the offing: Just look at the recent antitrust charges against Google in the US, and President Joe Biden’s decisions to appoint Big Tech critic Lina Khan as chair of the Federal Trade Commission and to sign a sweeping executive order taking aim at anti-competitive practices in tech.

As the historian Tim Wu has chronicled in his book The Attention Merchants, we’ve got reason to be hopeful about a regulatory approach: In the past, when people felt a new invention was getting particularly distracting, they launched countermovements that successfully curtailed it. When colorful lithographic posters came on the scene in 19th-century France, suddenly filling the urban environment, Parisians grew disgusted with the ads. They enacted laws limiting where posters could go. Those regulations are still in place today.

Changing the regulatory landscape is crucial because the onus cannot fall entirely on the individual to resist machinery designed to be irresistible. However, we can’t just wait for the laws to save us. Priyadarshi said digital tech moves too fast for that. “By the time policymakers and lawmakers come up with mechanisms to regulate, technology has gone 10 years ahead,” he told me. “They’re always playing catch-up.”

So even as we seek regulation of Big Tech, we individuals need to learn to self-regulate — to train our attention as best we can.

That’s the upshot of Jenny Odell’s book How to Do Nothing. It’s not an anti-technology screed urging us to simply flee Facebook and Twitter. Instead, she urges us to try “resistance-in-place.”

“A real withdrawal of attention happens first and foremost in the mind,” she writes. “What is needed, then, is not a ‘once-and-for-all’ type of quitting but ongoing training: the ability not just to withdraw attention, but to invest it somewhere else, to enlarge and proliferate it, to improve its acuity.”

Odell describes how she’s trained her attention by studying nature, especially birds and plants. There are many other ways to do it, from meditating (as the Buddhists recommend) to reading literature (as Martha Nussbaum recommends).

As for me, I’ve been doing all three. In the year since my sick friend’s Facebook post, I’ve become more intentional about birding, meditating, and reading fiction in order to train my attention. I am building attentional muscles in the hope that, next time someone needs me, I will be there for them, fully present, rapt.

Reporting for this article was supported by Public Theologies of Technology and Presence, a journalism and research initiative based at the Institute of Buddhist Studies and funded by the Henry Luce Foundation.

Sigal Samuel is a senior reporter for Vox’s Future Perfect and co-host of the Future Perfect podcast. She writes about artificial intelligence, neuroscience, climate change, and the intersection of technology with ethics and religion.


Source: vox.com
