A recent research report by the corporate accountability group Ekō, in collaboration with India Civil Watch International, has revealed alarming shortcomings in Meta's ability to detect and block inflammatory political advertising during India's general election.
The investigation, spanning the third and fourth phases of India's seven-phase election, targeted 189 contentious constituencies during the election's 48-hour "silence period." This period mandates a halt on all election-related advertising, granting citizens some breathing room to make a considered voting decision. However, researchers found that Meta failed to enforce these restrictions, allowing the dissemination of harmful political advertising, mainly benefiting the ruling Bharatiya Janata Party (BJP).
Between May 8 and 13, the two organisations received publishing approvals for 14 self-submitted, highly inflammatory advertisements, meaning that Meta's automated review systems failed to pick up on the posts' disturbing content. These ads contained violent rhetoric ranging from attacks on Muslim minorities to disinformation exploiting communal conspiracy theories and incitement to violence through Hindu-supremacist narratives. One ad even mimicked a recent AI-doctored video of Home Minister Amit Shah, which had gained enough notoriety to prompt the arrests of some opposition party members.
The approved ads were placed in English, Hindi, Bengali, Gujarati and Kannada, and mimicked or exaggerated existing Hindu-supremacist propaganda, such as:
- Ads targeting BJP opposition parties with accusations of Muslim favouritism
- Dog whistles about Muslim “invaders” and calls for violence against Muslims
- Narratives claiming electronic voting machine tampering and calling for uprisings
- Hindu supremacist language and calls for burning Muslims
- A call for executing a lawmaker accused of Pakistani allegiance
- Disinformation about affirmative action policies
This news comes despite Meta’s earlier announcements that it would detect and label AI-generated content to protect the electoral process. Of the 22 provocative ads submitted to the platform, 14 were reportedly approved within 24 hours, even though all of them breached Meta’s own policies on hate speech, misinformation, and violence.
The research also highlighted vulnerabilities in Meta’s automated ad review system. When faced with regional roadblocks, researchers simply set up dummy Facebook accounts from outside India to circumvent Meta's security measures and post these ads — including during the silence period. Even on the rare occasions that the ads were flagged by the platform, slightly adjusting the content was enough to bypass Meta’s scrutiny, the researchers found. This reveals how easily bad actors can exploit Meta’s platform to spread disinformation and incite violence.
Civil society groups, both in India and internationally, have called on Meta to enforce stricter measures during elections, emphasising the need for transparency and accountability. As India continues its electoral process, the need for robust action from social media giants like Meta is more critical than ever to protect democratic integrity and public safety.
A comprehensive report of the investigation can be accessed here.