READ MORE: Algorithms, Lies, and Social Media (OpenMind)
The algorithms that underpin social media need to be revised or liberal democracy itself will die. That’s the message from two researchers who have studied the effect of social media on society and believe that much of our current malaise stems from the deliberate warping of the news agenda to suit corporate greed.
Stephan Lewandowsky, a cognitive scientist at the University of Bristol in the UK, and Anastasia Kozyreva, a philosopher at the Max Planck Institute for Human Development in Berlin, aren’t the first to point out that we have sold our souls (our personal data, traded for cat videos and online networking) to the likes of Google and Facebook, which in turn sell our data to advertisers, whose targeted content then surfaces in our feeds.
It’s what Harvard social psychologist Shoshana Zuboff has called an assault on human autonomy in her book The Age of Surveillance Capitalism, which explains how platforms are incentivized to align their interests with advertisers, often at the expense of users’ interests or even their well-being.
Lewandowsky and Kozyreva agree that the algorithms that govern the information we receive are now doing fundamental damage to our collective ability to make decisions in our own interests.
They urge a four-point plan of action in response.
“Protecting citizens from manipulation and misinformation, and protecting democracy itself, requires a redesign of the current online ‘attention economy’ that has misaligned the interests of platforms and consumers,” they argue in OpenMind magazine. “Achieving a more transparent and less manipulative online media may well be the defining political battle of the 21st century.”
Get Real. Social Media Is Not the Root of All Evil.
By Adrian Pennington
Tech backlash against social media has been growing steadily since 2017 (the year Trump took office), but, as ever, the truth may be more nuanced. Are the algorithms that force us into echo chambers on various social media platforms really to blame for political extremism?
A number of academics and researchers are questioning the evidence.
“There’s a visible disconnect between the empirical evidence and overwrought declarations of doom amplified by the media,” writes Nirit Weiss-Blatt, who pulls the arguments together at The Daily Beast.
Taken as a whole, many of the prevailing narratives about social media (filter bubbles, echo chambers, fake news, algorithmic radicalization) are poorly founded.
“Correlational research cannot decipher which direction the effects of interest are going,” says Weiss-Blatt.
Here’s a selection of findings that might prompt a reassessment of the demonization of social media and the tech giants that run the platforms.
While the media and activists have obsessed over misinformation and disinformation on social media since the 2016 US presidential election, researchers from Harvard University analyzed both mainstream and social media coverage of the election and concluded: “The wave of attention to fake news is grounded in a real phenomenon, but at least in the 2016 election, it seems to have played a relatively small role in the overall scheme of things.”
Exposure to a wider range of views raises a different issue. According to Professor Michael Bang Petersen, a political scientist at Aarhus University quoted in The Daily Beast, that’s where a lot of the felt hostility of social media comes from: not because the sites make us behave differently, but because they expose us to a lot of things we wouldn’t normally encounter in our everyday lives.
Chris Bail, a professor of sociology and public policy at Duke University, has compiled studies that find a negative influence of social media on democracy alongside others that conclude there is no such effect or that the evidence is inconclusive.
Bail has also pointed out that the number of people exposed to fake news is small: only two percent of Twitter users routinely see it. More importantly, they don’t believe what they read when they do see it.
Jonathan Haidt, a social psychologist at New York University, argues that “social media may not be the primary cause of polarization, but it is an important cause,” a position that, in Weiss-Blatt’s view, repeatedly overplays the role of social media in society.
Then there’s the concern over “rabbit holes” — where algorithms supposedly take normal everyday people and turn them into radical extremists.
Professor Brendan Nyhan, a political scientist at Dartmouth College, found that aside from “some anecdotal evidence to suggest that this does happen,” the more frequent and bigger problem is that people “deliberately seek out vile content” via subscriptions, not via recommendation algorithms. That means they don’t fall into the radicalization rabbit hole, they choose it.
This should challenge the idea that it is social media that perpetuates conspiracy theories like the Big Lie. Such a blinkered focus, says Weiss-Blatt, “prevents us from properly confronting the deeper causes of division in both politics and society.”
It’s not that social media doesn’t have an effect, but it’s usually on already radicalized individuals who find extremist content that reinforces their predispositions.
She argues that journalists should “beware of overconfident techies bragging about their innovation capabilities and overconfident critics accusing that innovation of atrocities.”
She now urges readers to adopt the same healthy skepticism.
There is a positive aspect to Techlash pressure: it makes the big tech companies think hard about putting safeguards in place in advance, ask the “what could possibly go wrong?” questions, and deploy resources to battle potential harms. That’s undoubtedly a good thing.
However, when lawmakers promote grandstanding bills to “fix” social media, the devil is always in the details, and we ought to think a lot harder about their formation, Weiss-Blatt maintains. There are real costs when regulators waste their time on overly simplistic solutions based on inconclusive evidence.
One legislative battle worth fighting would be to lift the shroud of secrecy around the tech companies. When platforms operate as black boxes, relying more and more on recommendation algorithms, it is easy to fear the worst about what happens inside them. Greater transparency is crucial here.
“Independent researchers ought to be provided with more data from the big tech companies,” she says. “That way, we could expand the discussion and give more room for a whole-of-society approach where experts offer informed arguments.”
In short, given the ongoing and inconclusive research on the subject, it’s difficult to say anything on the topic with absolute certainty, “but we continue to see headlines with absolute certainties as if the threat posed by social media is an undeniable fact,” says Weiss-Blatt.
“It’s not. And while scientists are still raising question marks, the media continues to raise exclamation marks.”
READ MORE: Don’t Be So Certain That Social Media Is Undermining Democracy (The Daily Beast)
Contrast that with the Arab Spring that began in late 2010, when countries like Tunisia and Egypt revolted against autocratic regimes. At the time, the internet and social media were seen (in the liberal West) as a force for good, undercutting state propaganda with real-time reportage and an ability to connect and coordinate a groundswell of civil action. Since then, states have co-opted social media into their armories, using the same platforms to disseminate misinformation or simply to dismiss truth as fake news. This has happened in authoritarian countries like Russia, but also in Europe and the US, where leading social media platforms have aided and abetted the spread of lies because it is in their financial interest to do so.
For example, the writers say YouTube’s recommendations amplify increasingly sensational content with the goal of keeping people’s eyes on the screen. They point to a study, “YouTube Regrets,” which found that YouTube “not only hosts but actively recommends videos that violate its own policies concerning political and medical misinformation, hate speech, and inappropriate content.”
READ MORE: YouTube Regrets (Mozilla)
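To make that dynamic concrete, here is a minimal sketch, in Python, of the kind of engagement-optimized ranking the writers describe. Every name, predicted quantity and scoring rule below is a hypothetical illustration of the incentive, not YouTube’s actual system.

```python
# Toy engagement-optimized recommender. All names and formulas are
# hypothetical illustrations, not YouTube's real ranking system.
from dataclasses import dataclass

@dataclass
class Candidate:
    video_id: str
    p_click: float              # predicted probability the user clicks
    expected_watch_secs: float  # predicted watch time if they do click

def engagement_score(c: Candidate) -> float:
    # Ranking by expected watch time rewards whatever keeps eyes on the
    # screen, which is how increasingly sensational content wins out.
    return c.p_click * c.expected_watch_secs

def recommend(candidates: list[Candidate], k: int = 5) -> list[Candidate]:
    return sorted(candidates, key=engagement_score, reverse=True)[:k]
```

Nothing in an objective like this checks whether a video is accurate or complies with policy; if sensational content is watched longer, it ranks higher.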
In the same vein, our attention online is more effectively captured by news that is predominantly negative or awe-inspiring. Misinformation is particularly likely to provoke outrage, and fake news headlines are designed to be more negative than real news headlines.
READ MORE: Cross-national evidence of a negativity bias in psychophysiological reactions to news (PNAS)
READ MORE: Viral News on Social Media (ResearchGate)
READ MORE: Investigating the emotional appeal of fake news using artificial intelligence and human contributions (Emerald Insight)
“In pursuit of our attention, digital platforms have become paved with misinformation, particularly the kind that feeds outrage and anger. Following recent revelations by a whistle-blower, we now know that Facebook’s newsfeed curation algorithm gave content eliciting anger five times as much weight as content evoking happiness.”
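A minimal sketch of what that weighting means in practice: the 5:1 anger-to-like ratio reflects the whistle-blower reporting quoted above, while every other detail is invented for illustration.

```python
# Reaction-weighted feed scoring. The 5:1 ratio comes from the reporting
# quoted above; everything else is a hypothetical illustration.
REACTION_WEIGHTS = {
    "like": 1,   # happiness-evoking engagement
    "angry": 5,  # weighted five times as heavily, per the reporting
}

def engagement_score(reactions: dict[str, int]) -> int:
    """Sum a post's reactions, weighting anger five times a like."""
    return sum(REACTION_WEIGHTS.get(kind, 1) * count
               for kind, count in reactions.items())

# Under this rule, a post that angers 30 people outranks one
# that 100 people merely like (150 vs. 100):
assert engagement_score({"angry": 30}) > engagement_score({"like": 100})
```

Under such a rule, outrage-provoking content rises in the feed even when it engages fewer people overall.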
Yet Facebook and other social media platforms have responded. Whereas only a few years ago they argued that they had no political role and shouldn’t filter what was posted on their platforms, misinformation (what some would call conspiracy theories) about controversial topics like Covid-19, the war in Ukraine and climate change is now being censored by the networks themselves.
This too is problematic, according to Lewandowsky and Kozyreva. “This kind of content moderation inevitably means that human decision makers are weighing values. It requires balancing a defense of free speech and individual rights with safeguarding other interests of society, something social media companies have neither the mandate nor the competence to achieve.”
Their main remedy is to ensure, via law if necessary, that we are all better educated about the extent to which our knowledge of the world is being warped.
Even people who are aware of algorithmic curation tend not to have an accurate understanding of what it involves. A Pew Research Center survey published in 2019 found that 74% of American Facebook users did not know that the platform maintained a list of their interests and traits.
The authors write: “They are often unaware that the information they consume and produce is curated by algorithms. And hardly anyone understands that algorithms will present them with information that is curated to provoke outrage or anger, attributes that fit hand in glove with political misinformation.”
READ MORE: Facebook Algorithms and Personal Data (Pew Research Center)
So, to shift this balance of power in favor of objective truth, they argue for a redesign or resetting not just of the algorithms themselves but of public knowledge of what platforms do and what they know.
- There must be greater transparency and more individual control of personal data.
- Platforms must signal the quality of the information in a newsfeed so users can assess the risk of accessing it. For example: Does the material come from a trustworthy place? Who shared this content previously? (A sketch of what such a signal could look like follows after this list.)
- The public should be alerted when political speech circulating on social media is part of an ad campaign.
- The public must know exactly how algorithms curate and rank information and then be given the opportunity to shape their own online environment. For example, independent agencies must be able to audit platform data and identify measures to stem the flow of misinformation.
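One way to picture the second and fourth points is a provenance label attached to every feed item, exposing where material comes from and who shared it, in a form both users and independent auditors could inspect. The sketch below is entirely hypothetical; the field names and the trust signal are assumptions, not any platform’s real API.

```python
# Hypothetical provenance label for a feed item. Field names and the
# trust signal are illustrative assumptions, not a real platform API.
from dataclasses import dataclass, field

@dataclass
class ProvenanceLabel:
    source_domain: str               # where the material originates
    source_rated_trustworthy: bool   # e.g. a third-party credibility rating
    first_shared_by: str             # account that introduced it to the platform
    reshare_chain: list[str] = field(default_factory=list)  # who shared it since
    is_paid_political_ad: bool = False  # supports the third point: ad disclosure

def quality_warning(label: ProvenanceLabel) -> str | None:
    """Return a user-facing caution when an item's provenance looks risky."""
    if label.is_paid_political_ad:
        return "This post is part of a paid political ad campaign."
    if not label.source_rated_trustworthy:
        return f"Source {label.source_domain} has no trustworthiness rating."
    return None
```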
There are laws in progress in the US and Europe intended to tighten data privacy, but the writers suggest that there is considerable public and political skepticism about regulation in general, and about governments stepping in to regulate social media content in particular.
The best solution, they say, lies in shifting control of social media “from unaccountable corporations to democratic agencies that operate openly, under public oversight.”
But how likely is that?
“There’s no shortage of proposals for how this might work. For example, complaints from the public could be investigated. Settings could preserve user privacy instead of waiving it as the default.”
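The privacy-by-default idea, at least, is easy to picture. A minimal sketch, with entirely hypothetical setting names: every data-sharing option starts off, and waiving privacy requires an explicit opt-in.

```python
# Privacy-preserving defaults, as the quote suggests. Setting names are
# hypothetical; the point is that waiving privacy takes deliberate action.
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    ad_personalization: bool = False           # off unless the user opts in
    share_activity_with_partners: bool = False
    algorithmic_feed: bool = False             # chronological feed by default

    def opt_in(self, setting: str) -> None:
        if not hasattr(self, setting):
            raise ValueError(f"unknown setting: {setting}")
        setattr(self, setting, True)  # privacy is waived only deliberately
```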
Another idea is to develop a digital literacy toolkit based on “boosting,” an approach that aims to increase users’ awareness and competence in navigating the challenges of online environments.
The problem, as I see it, is this: outside of academia, the liberal intelligentsia and intellectual paternalism, who actually cares enough to ditch “The Circle” and lose their crutch of connection to the outside world?