Bot accounts have been used to create discord around controversial topics in many countries.

Bots frequently amplify misinformation and conspiracy theories shared by real people, giving a megaphone to what might otherwise be a lone misguided voice. They hijack conversations on controversial issues to derail or inflame the discussion. For example, bots have posed as Black Lives Matter activists and shared divisive posts designed to stoke racial tensions. When real people try to make their voices heard online, they do so within a landscape that’s increasingly poisoned and polarized by bots.

I have spent much of my career developing artificial intelligence to identify online bots. My colleagues and I are in a computational arms race: As the tools we build to track down fake accounts improve, so do the bots. As important as our work is, using software tools to find individual bots won’t eliminate the problem. Social media platforms must act to root out bots on a systemic level.

What makes bots increasingly dangerous is their sophistication and scale. Artificial intelligence has become so good at mimicking human speech that it’s hard for the average user to tell what’s real and what’s fake. Last fall, a bot account powered by GPT-3, an advanced language model, began posting on Reddit. The conversations it had were so human-like that it took more than a week before users realized they were interacting with a bot. You can see for yourself just how sophisticated this AI is on sites like Talk to Transformer.

Bots also have tremendous reach. While the average person can share misinformation with dozens or perhaps hundreds of friends on social media, an army of bots can spread the same content to millions in a matter of hours through a steady drumbeat of posts. A 2018 study found that just 6 percent of Twitter accounts, all of them suspected bots, were responsible for spreading 31 percent of misinformation around the 2016 election. In many cases, the false information began trending in less than 10 seconds.

Simply removing bot accounts from popular platforms isn’t enough. Facebook deleted nearly nine billion bogus accounts in 2018 and 2019, but the company still estimates that at least 5 percent of its users are fake. Organized misinformation campaigns have also been known to hack real accounts and convert them to bots, taking advantage of these accounts’ existing networks and credibility.

Instead of playing whack-a-mole with individual accounts, social media platforms need to zoom out and attack the bots en masse. As AI becomes more sophisticated at mimicking humans, the best way to spot bot activity is by looking at the context of a post. Has a hashtag risen out of nowhere, driven by an interlinked network of suspicious accounts? Does a group of users post about a single topic ad nauseam, echo similar talking points, or repeatedly divert unrelated conversations to a particular topic?
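To make that kind of contextual analysis concrete, here is a minimal sketch of two of the signals just described: a hashtag whose volume arrives in one sudden burst, and accounts echoing near-duplicate talking points. The post schema, function names, and thresholds are hypothetical illustrations for this article, not any platform’s actual detection pipeline.

```python
from difflib import SequenceMatcher

# Hypothetical post record: (account_id, hashtag, timestamp_seconds, text).
# A toy illustration of context-based signals, not a real detection system.

def burst_score(posts, hashtag, window=3600):
    """Fraction of a hashtag's posts landing within `window` seconds of its
    first appearance. A hashtag that 'rises out of nowhere' concentrates
    most of its volume in a sudden initial burst."""
    times = sorted(t for _, h, t, _ in posts if h == hashtag)
    if not times:
        return 0.0
    early = sum(1 for t in times if t - times[0] <= window)
    return early / len(times)

def echo_score(posts, hashtag, threshold=0.8):
    """Fraction of cross-account post pairs on a hashtag whose text is
    near-identical. Coordinated accounts echo similar talking points."""
    texts = [(a, x) for a, h, _, x in posts if h == hashtag]
    pairs = similar = 0
    for i in range(len(texts)):
        for j in range(i + 1, len(texts)):
            if texts[i][0] == texts[j][0]:
                continue  # ignore pairs from the same account
            pairs += 1
            if SequenceMatcher(None, texts[i][1], texts[j][1]).ratio() >= threshold:
                similar += 1
    return similar / pairs if pairs else 0.0

# Example: a hashtag is flagged when both signals are high.
posts = [
    ("bot1", "#wedge", 0,  "They are lying to you about the border"),
    ("bot2", "#wedge", 30, "They are lying to you about the border!"),
    ("bot3", "#wedge", 60, "they are lying to you about the border"),
    ("user9", "#cats", 0,  "my cat learned to open the fridge"),
]
if burst_score(posts, "#wedge") > 0.9 and echo_score(posts, "#wedge") > 0.5:
    print("#wedge looks coordinated")
```

In practice, platforms would combine many such signals with back-end data, such as IP addresses and account-creation patterns, before flagging a campaign.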

When the algorithms that decide what you and I see are a black box, it’s difficult to stop misinformation from spreading and to gauge the authenticity of what we’re exposed to online. Only the companies themselves have the necessary back-end data, such as accounts’ IP addresses and posting patterns, to provide context about why a specific hashtag is trending or where and how a piece of viral misinformation started.

Once bot campaigns are identified, social media companies can take several steps to hinder them while respecting the free speech of human users. They could require a simple CAPTCHA test before publishing any post containing a hashtag that is largely being spread by bot accounts. They could give users more context about the information they encounter, such as the country where a viral hashtag originated or patterns in the prior posting history of other accounts. They could even experiment with computational techniques that generate a summary of each user’s activity, pulling the curtain back on accounts that post relentlessly about a single topic or tend toward inflammatory content. There are also changes companies can make behind the scenes, such as tweaking their algorithms to de-prioritize posts from bot-driven campaigns in users’ news feeds. Facebook did this temporarily in the aftermath of the 2020 election, and traffic to more authoritative news sources increased as a result.
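As one illustration of the activity-summary idea above, the following sketch measures how concentrated an account’s posting history is on a single topic. The record format, function name, and the Herfindahl-style index are assumptions made for the example, not a description of any platform’s tooling.

```python
from collections import Counter

def activity_summary(posts):
    """Summarize a single account's posting behavior.

    `posts` is a list of (hashtag, text) pairs for one account (a
    hypothetical schema). Returns the post count, the most frequent
    hashtag, and a Herfindahl-style concentration index: 1.0 means
    every post uses one hashtag; values near 0 mean a varied history.
    """
    if not posts:
        return {"posts": 0}
    counts = Counter(h for h, _ in posts)
    total = sum(counts.values())
    top_tag, top_n = counts.most_common(1)[0]
    concentration = sum((n / total) ** 2 for n in counts.values())
    return {"posts": total, "top_hashtag": top_tag,
            "top_share": top_n / total, "concentration": concentration}

# A relentless single-topic account scores 1.0...
print(activity_summary([("#wedge", "post") for _ in range(50)]))
# ...while a varied account scores much lower (here, about 0.33).
print(activity_summary([("#cats", "a"), ("#food", "b"), ("#music", "c")]))
```

An account whose concentration approaches 1.0 posts about essentially one topic; surfacing a summary like this beside a profile would give users exactly the kind of context described above.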

Thus far, social media companies have been reluctant to fight bots as aggressively as possible. Twitter recently began labeling state-sponsored media accounts, such as Russia Today, which are often used to post content that is then amplified by bots. However, this small step came only after prolonged pressure from users and the US government. The reality is that platforms have ample incentives to continue promoting divisive content and misinformation as long as it engages their audience. All activity, whether authentic or not, is good for their bottom lines.

Ultimately, government regulations may be necessary to make platforms safeguard the integrity of online discourse. However, regulation does not mean censoring content, an approach that has backfired in India and other countries. Instead, governments should pursue rules that encourage transparency, such as requiring platforms to reveal data about the geographic origin or posting behavior of bot-associated accounts, hashtags, or viral content. Regulations could also require platforms to explain their decisions to block or remove content in the event that they do so. Such transparency can offer users important context for what they see online without limiting free speech. Our long-term goal should be to teach people to think more critically about their information sources, and transparency gives them the information to make those judgments.

Reestablishing space for productive public discussions around politics, climate change, public health, and racial justice requires tougher tactics against the Internet’s bot infestation. If social media companies take responsibility for their platforms and stop letting bots drown out and derail the conversations of real people, then we can get back to the founding principle of the Internet: the authentic and free exchange of ideas.

Victor Benjamin is an assistant professor of information systems at the W.P. Carey School of Business at Arizona State University.

‘The broader Russian strategy is pretty clearly about destabilizing the country by amplifying existing divisions,’ says a former state department adviser. Photograph: Guardian Design Team

Russian trolls and bots focused on controversial topics in an effort to stoke political division on an enormous scale – and it hasn’t stopped, experts say

For the past year, the world has reeled over escalating reports of how Russia “hacked” the 2016 US presidential election by stealing emails from Democrats, attacking voter registration lists and voting machines, and running a social media shell game.

Such is the focus on Russian meddling that congressional investigators are increasingly aggressive in asking the big tech companies to account for how their platforms became the staging grounds for an attack on American democracy. Early next month that scrutiny will intensify, with executives from Facebook, Google and Twitter formally invited to appear before the House intelligence committee on Capitol Hill in Washington.

What has now been made clear is that Russian trolls and automated bots not only promoted explicitly pro-Donald Trump messaging, but also used social media to sow division in America by stoking disagreement around a plethora of controversial topics such as immigration and Islamophobia.


And, even more pertinently, it is clear that these interventions are continuing as Russian agents stoke division around such recent topics as white supremacist marches and NFL players taking a knee to protest police violence.

The overarching goal, during the election and now, analysts say, is to expand and exploit divisions, attacking the American social fabric where it is most vulnerable, along lines of race, gender, class and creed.

“The broader Russian strategy is pretty clearly about destabilizing the country by focusing on and amplifying existing divisions, rather than supporting any one political party,” said Jonathon Morgan, a former state department adviser on digital responses to terrorism whose company, New Knowledge, analyzes the manipulation of public discourse.

“I think it absolutely continues.”

In the last month – mostly through vigorous reporting and academic research – we have also learned that the impact of Russia’s Facebook infiltration was far more widespread than Mark Zuckerberg claimed when Barack Obama pulled him aside at a conference in Peru last November to inform the young titan he had a problem on his hands. As more evidence emerges revealing the extent of the Russian web invasion, it is clear that its footprint is far larger than the tech giants have ever conceded.

On Facebook alone, Russia-linked imposters had hundreds of millions of interactions with potential voters who believed they were interacting with fellow Americans, according to an estimate by Jonathan Albright of Columbia University’s Tow Center for Digital Journalism, who broke the story wide open with the publication of a trove of searchable data earlier this month.

Facebook’s chief operating officer, Sheryl Sandberg, and its vice-president of global communications and public policy, Elliot Schrage, on Capitol Hill. Photograph: James Lawler Duggan/Reuters

Those interactions may have reinforced the voters’ political views or helped to mold them, thanks to the imposter accounts’ techniques of echoing shrill opinions and presenting seemingly sympathetic views with counterintuitive, politically leading twists.

During the election, for example, an imposter Facebook page called “Being Patriotic” used hot-button words such as “illegal”, “country” and “American” and phrases such as “illegal alien”, “Sharia law” and “welfare state”, according to an analysis of Albright’s data by the Associated Press. The page racked up at least 4.4m interactions, peaking between mid-2016 and early 2017.

The urgency of the threat has not been matched by the response of the tech companies, critics say, as they have been slow to acknowledge the problem.


A reference to Russia in an April Facebook draft report about election influence was inexplicably cut, the Wall Street Journal reported last week. Only last month did Facebook acknowledge that Russia-linked pages had bought thousands of ads on the platform.

According to the Washington Post this week, Google has detected similar ad-buying activity, of unknown scope, on YouTube, Gmail and its search engine – though the company has made nothing public. The Russian imposters have also been detected on Instagram, Twitter and even Pokémon Go.

Facebook did not reply to repeated requests for comment. But the gravity of the situation, whose dimensions are still unknown, was underscored on Thursday in an interview that Facebook’s chief operating officer, Sheryl Sandberg, gave to Axios.

“Things happened on our platform that shouldn’t have happened,” Sandberg said, adding that the company owed the American public “not just an apology, but determination” to address the problem.

The attackers appear to have a handy, if unwitting, ally in Trump, who is generous in spreading bile online. In certain recent cases, social media accounts linked with Russian influence operations appear to have taken cues directly and immediately from the @realdonaldtrump Twitter account, according to analysis by the Washington-based Alliance for Securing Democracy, which maintains a daily tracker of the networks in question.

After Trump criticized the “poor leadership ability” of Carmen Yulín Cruz, mayor of San Juan, Puerto Rico, on 30 September, for example, Russian-linked Twitter accounts disseminated articles with “the primary theme of either discrediting” Cruz “or accusing the media of spreading ‘fake news’”, the alliance said.

The week before that, the clandestine network poured accelerant on the fight picked by Trump with the mostly African American players in the NFL who kneeled during the national anthem in protest of police violence. Instead of simply echoing the president’s demand for a boycott unless the players stood, however, the Russian accounts took both sides of the issue, spreading both the hashtags #TakeaKnee and #BoycottNFL.


“The ads and accounts appeared to focus on amplifying divisive social and political messages across the ideological spectrum – touching on topics from LGBT matters to race issues to immigration to gun rights,” said Alex Stamos, the chief security officer at Facebook, in the first public statement the company made on the matter.

Albright’s data encompasses six Facebook pages previously linked by media investigations to Russia. The pages were not clumsily partisan, pro-Trump or anti-Hillary Clinton sites. Instead they worked by crafting identities around hot-button issues in US politics, and by wielding a crafty sympathy, in some cases, with causes seen as antithetical to Trump such as LGBTQ pride and opposing police violence.

“There’s some really intricate maneuvering going on,” said Albright. “It’s definitely set up not to directly force issues but to identify people that fall into the wedge categories that can be used to influence others or to push conversations elsewhere.”

The imposter pages included Secured Borders, an anti-immigrant account that grew to 133,000 followers; Texas Rebels, which parroted Lone Star state pride while criticizing Clinton; Being Patriotic, which attacked refugees while defending the Confederate battle flag; LGBT United, which subtly espoused “traditional” family values; and Blacktivists, a faux satellite of the Black Lives Matter movement.

“It seems Americans should be wary of police brutality more than of Isis terrorists,” read a typical Blacktivists post, which was liked thousands of times.

Donald Trump shakes hands with San Juan’s mayor, Carmen Yulín Cruz. After he criticized Cruz on Twitter, Russian-linked Twitter accounts disseminated articles focused on discrediting her, analysts say. Photograph: Evan Vucci/AP

“Why there’s so many privileges and benefits for refugee kids, but American kids forced to grow up in poverty?” asked one September 2016 post by Secured Borders. “That’s absolutely unacceptable!!”

“More than 300,000 vets died awaiting care,” read a post on Being Patriotic. “Do liberals still think it is better to accept thousands Syrian refugees than to help our veterans?”

Owners of the imposter pages could post controversial – or seemingly sympathetic – messages or event announcements, and then, by inviting and observing interactions such as “likes”, comments or merely views, gather information about genuine American Facebook users and potential voters. Those voters could then be targeted with political content that appealed to some of their most closely held sympathies.


The strategy was highly effective, in terms of penetration. Albright’s research showed that the six Russia-linked Facebook pages had generated more than 18m interactions – a conservative estimate, he said – before Facebook shut them down.

But those were just six accounts among “dozens and dozens and dozens of pages” that bore obvious markings shared by other accounts linked with Russia, said Albright.

“Those 18m interactions are only for those six pages, just on Facebook” and not Instagram or other social media, Albright said. “So what are we talking about here, overall? We’re talking about hundreds of millions of interactions.”

The accounts and others have since been removed by Facebook. But “I don’t think they’ve even begun to find” all the imposter accounts, Albright said, owing to the imposters’ verisimilitude.

Morgan, the former state department adviser, called the response so far by the big tech companies to the Russian presence on their platforms a “misfire”.

“What I see is Facebook and Twitter and Google trying to define this problem narrowly as about political advertising, and I think that that misses the mark,” he said. “Because the next group of people that are going to be vulnerable is American industry, especially industry that’s foundational to how our society operates. So the energy industry and the financial industry – they can be manipulated just like our electoral process.

“I think a narrow focus on political advertising is ultimately going to miss the forest for the trees.”

Albright agreed that “there needs to be some kind of oversight”.

“It doesn’t fall completely on Facebook,” he said. “The scale at which this is happening is concerning enough that something needs to happen. We need to rethink a lot of this, because it’s definitely not working.”

Everyone in the know, from the bipartisan heads of the Senate intelligence committee on down, agrees with the researchers that more pressure needs to be applied at every level, in the tech world and in Moscow, to figure out what happened and what is still happening.

Everyone, with one notable exception.
