Monetising hate

THE unverified news of a young female student being sexually assaulted at a private college in Lahore has shocked the country. The news, which went viral via social media and messaging apps, triggered protests; demonstrations held by college students quickly turned violent across Punjab, leading to the death of a security guard. The government responded with its usual high-handedness: law enforcers used excessive physical force and disproportionate legal measures, such as the registration of FIRs, the arrest of young students and the imposition of Section 144.
Media outlets have focused on the actions of the state and the administration at the educational institution at the centre of the alleged incident, and to some extent on the content creators ostensibly behind the spread of disinformation. While we must keep an eye on such actors, what is troubling is that the role of social media platforms in amplifying disinformation — in this case, through apparent inaction — has not been probed adequately.
The speed at which social media content spreads means that disinformation researchers, fact-checkers and media outlets are often unable to keep pace with “the evolving dynamics of disinformation” in time. By the time a more accurate picture emerges, significant public damage has already been done. In the case of the alleged rape in Lahore, news outlets and fact-checkers scrambled to provide context, even as the government and other institutions claimed that the attack never happened, a claim that was met with heavy scepticism.
The discourse on current events, locally or internationally, is frequently shaped by accounts on YouTube, TikTok, Instagram, X, Facebook, etc, whose content is shared more than that of the BBC or CNN. Rather than being nuanced and level-headed, however, social media personalities or outlets “routinely promote sensationalist, extreme or false content, rapidly spreading it to vast audiences”, according to the UK-based Institute for Strategic Dialogue.
The protests in response to the alleged rape in Lahore bring to mind how the UK, too, fell victim to disinformation this summer. The 2024 riots there were instructive: not only did social media fan the flames of public anger, but a Lahore-based web presence, ‘Channel3Now’, was found responsible for promoting dangerous disinformation, false news that it was monetising on social media.
In Pakistan, to what extent did platforms contribute to the spread of the rumour of a rape in Lahore and to the ensuing offline violence? What role did the recommendation algorithms of social media platforms play? And to what extent was the rumour spread by accounts that stood to benefit financially from platform ad revenue-sharing programmes? In Europe, active steps are being taken or proposed to hold Big Tech accountable. The European Union’s Digital Services Act (DSA), 2022, for example, makes social media companies themselves responsible for tackling disinformation, with fines of up to six per cent of a platform’s global annual turnover.
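To put that penalty ceiling in concrete terms, the short Python sketch below works through the arithmetic. The six per cent rate comes from the DSA itself; the turnover figure is purely hypothetical, for illustration only.

    # Sketch of the DSA's penalty ceiling: fines may reach up to
    # six per cent of a platform's global annual turnover.

    DSA_MAX_FINE_RATE = 0.06  # up to 6pc of global annual turnover

    def max_dsa_fine(global_annual_turnover: float) -> float:
        """Return the maximum possible DSA fine for a given turnover."""
        return global_annual_turnover * DSA_MAX_FINE_RATE

    # Hypothetical example: a platform with $100bn in global annual
    # turnover could face a fine of up to $6bn.
    print(f"${max_dsa_fine(100e9) / 1e9:.1f}bn")

For the largest platforms, in other words, the exposure runs into billions of dollars, which is what gives the DSA its teeth.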
Where harmful yet legal content is shared online, platforms that fail to remove such material may find themselves blocked by governments. In the Global South, platforms have been slower to respond, or have not responded at all, than they would likely have been had they been facing the authorities in the US, UK or EU. Recent events have made this disparity in accountability even more evident, particularly when it comes to addressing the monetisation of disinformation on social media in Pakistan and the larger Global South.
It is time social media platforms paid the same attention to countries in the Global South: regulating the monetisation of content, tackling harmful content effectively, and permanently removing repeat offenders from their platforms. Such a proactive approach could prevent governments from imposing more stringent and potentially problematic laws, or from banning platforms altogether.
Civil society organisations (CSOs) from the Global Majority also have a role to play, by closely monitoring the enforcement of the DSA in Europe. By understanding how platforms comply with the DSA, these CSOs can use those insights to push for similar measures in their own regions, targeting the harm caused by disinformation and damaging content. That effort includes exposing and addressing the monetisation of hate by content creators who present themselves as journalists, using the label to spread hate and disinformation while profiting from content they misleadingly classify as ‘news’ or ‘journalistic content’.
The writer is executive director, Digital Rights Foundation.
Adnan Chaudhri is senior research associate, Digital Rights Foundation.
Published in Dawn, October 22nd, 2024
