BW Businessworld

Tackling Problematic Content Online, At Scale

Disinformation campaigns are always created with a specific objective: to manipulate perceptions and sow discord in a way that benefits their proponents.

The last few weeks of news coverage have reminded us how easy it is to create, disseminate and proliferate misinformation in India. The ingredients are often the same: a hot-button topic, local or foreign bad actors, explicit or implicit collusion between these actors, narrative localisation and trending hashtags.

Tackling misinformation and problematic content at scale is difficult. Messages morph and evolve in ever new and imaginative ways. Catching such posts on open networks such as Facebook and Twitter has become easier, but less so on platforms’ private groups and closed networks such as WhatsApp, Signal and the dark web, as well as emerging local platforms such as Koo and ShareChat.

To begin to tackle the problem, it is important to distinguish between misinformation and disinformation. They differ in source and require different remedial actions. Misinformation can mean false rumours or generally inaccurate and misleading information, often shared without malice or coordination. Disinformation, such as propaganda or foreign interference, is spread deliberately and for a specific purpose.

For example, during Prime Minister Modi’s visit to China in October 2019, our systems found evidence that over 12,000 tweets under the #GobackModi hashtag originated from Pakistan, with over a third of these accounts created on or after Sept. 30, suggesting that they were created solely for this purpose. A majority of these accounts were inauthentic (bots) that actively engaged with authentic tweets under the hashtag to boost them.
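The creation-date signal described above can be illustrated with a minimal sketch. This is a hypothetical heuristic for illustration only, not Logically’s actual detection pipeline: accounts created on or after a cutoff date shortly before a campaign spike are flagged as candidates for purpose-built inauthentic accounts. The function name, account data and cutoff are all invented for the example.

```python
from datetime import date

def flag_recent_accounts(accounts, cutoff):
    """Return handles of accounts created on or after `cutoff`.

    `accounts` is a list of (handle, creation_date) pairs. A creation date
    clustered just before a hashtag spike is one weak signal (among many)
    that an account may have been created for that campaign.
    """
    return [handle for handle, created in accounts if created >= cutoff]

# Toy data: one long-standing account, two created around the campaign.
accounts = [
    ("user_a", date(2017, 3, 12)),   # long-standing account
    ("user_b", date(2019, 10, 1)),   # created just before the spike
    ("user_c", date(2019, 9, 30)),   # created on the cutoff day
]
suspicious = flag_recent_accounts(accounts, cutoff=date(2019, 9, 30))
# suspicious -> ["user_b", "user_c"]
```

In practice such a filter would only be a first pass; real systems combine it with behavioural signals such as coordinated retweeting and posting cadence.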

Disinformation campaigns are always created with a specific objective: to manipulate perceptions and sow discord in a way that benefits their proponents. Their methods vary: creating new narratives, amplifying existing ones, and targeting the influencers best placed to deliver the message to the desired audience.

For example, the theory that Covid-19 was a bioweapon released by China was popularised globally by the right-wing blog Zero Hedge but originated from a fringe Indian news site with links to Russian disinformation networks. Its virality illustrates the difficulty of monitoring the spread of disinformation, and the need for expert oversight to keep track of a volatile information ecosystem.

The widespread dissemination of false narratives can lead to a breakdown in trust between citizens, governments and media. In turn, this can pose a direct threat to democratic processes, national security, public safety and public health. 

The 5G coronavirus theory, which claimed that 5G masts caused Covid-19, dwarfed the bioweapon theory in the UK. The 5G theory spiked on the same day that UK lockdown measures were announced. Arsonists burnt down 80 mobile phone masts and threatened telecoms engineers with violence.

When conspiracy narratives adapt to survive in different environments, we call this narrative localisation. Conspiracy theories tend to localise their narratives by jettisoning irrelevant elements and adding in more appropriate ones. This allows them to thrive in a new ecosystem, achieving better resonance with that specific audience. There are two main forms of narrative localisation: demographic and geographic.

Demographic localisation takes advantage of existing emotive subjects. We found examples of demographic localisation in our investigation into Spanish-language disinformation during the 2020 US presidential election, where existing anti-abortion or anti-Communist concerns were emphasised to increase the appeal of certain disinformation narratives.

A recent example of geographic localisation could be seen in Indian lawyer Vibhor Anand’s adaptation of a US-based QAnon conspiracy to accuse Indian celebrities like Salman Khan of being involved in child trafficking.

Given that digital literacy remains low in rural India, and that Facebook has been one of the biggest drivers of rural India’s internet usage, mis- and disinformation campaigns are more likely there than in Western countries to exert systematic and disturbing influence on real-world activities.

India’s hugely diverse demographics, its sheer number of cultural groups and languages, its polarised politics and its sensationalist media leave it vulnerable.

During the 2019 General Elections, we analysed 944,486 news articles, of which 14.1% were unreliable and 25% fake. In a unique experiment in Maharashtra, we received 11,560 unique requests to check the veracity of news items in the 50 days from 4th May 2019. We found users were most likely to expose content if they were confident of receiving a response within 30 minutes, whether an outright debunk, a verification or a statement acknowledging the nuances and complexity of a particular message.

The immediacy of response helped expose novel content; only 793 of the requests related to existing third-party fact-checks, indicating that this methodology uncovers information previously hidden from fact-checkers and other activists. The average active user spread these verifications to 10-30 WhatsApp users.

If these shallow networks were scaled to 1-3 million engaged users in India, this approach could reach all users affected by mis/disinformation on WhatsApp in the country. It is this kind of early intervention, rooted in an understanding of how people share and trust the information they receive, that should be explored further.
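The scaling claim above is a back-of-envelope calculation, and can be sketched as one. The sketch below uses only the figures stated in the article (each engaged user forwards verifications to 10-30 WhatsApp contacts); the function name and single-hop model are illustrative assumptions, not a published methodology.

```python
def estimated_reach(engaged_users, fanout_low=10, fanout_high=30):
    """Return (low, high) bounds on one-hop verification reach.

    Each engaged user is assumed to forward a verification to between
    `fanout_low` and `fanout_high` WhatsApp contacts, per the article's
    observed 10-30 range. Overlap between contact lists is ignored, so
    these are upper-bound estimates.
    """
    return engaged_users * fanout_low, engaged_users * fanout_high

# 1 million engaged users -> 10 to 30 million contacts reached in one hop.
low, high = estimated_reach(1_000_000)
# 3 million engaged users -> 30 to 90 million contacts reached in one hop.
low_3m, high_3m = estimated_reach(3_000_000)
```

Even with heavy overlap between contact lists, a pool of 1-3 million engaged users plausibly touches tens of millions of WhatsApp users, which is the intuition behind the article’s claim.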

We have all seen the corrosive impact of the information crisis on society up close, all around the world. Problematic content online can only be addressed through close partnership between governments, platforms, media houses, researchers, civil society and companies like ours. We need better information and data sharing, new and innovative approaches to threat intelligence and fact-checking that deliver scalable and effective solutions, and the proliferation of robust digital literacy and education programmes. Logically is committed to playing its part in this mission, and looks forward to continuing this important work in India and around the world.

Disclaimer: The views expressed in the article above are those of the author and do not necessarily represent or reflect the views of this publishing house. Unless otherwise noted, the author is writing in his/her personal capacity. They are not intended and should not be thought to represent official ideas, attitudes, or policies of any agency or institution.

Sagar Kaul

Vice President of India Operations, Logically
