Center for the Study of Organized Hate (CSOH) Report Cover Page

Synthetic Hate: How AI-Generated Imagery is Fueling a New Wave of Islamophobia in India

A groundbreaking report from the Center for the Study of Organized Hate exposes how AI-generated imagery is fueling a new, virulent wave of Islamophobia in India, documenting over 1,300 hateful posts and systemic platform failures.

Geetha Sunil Pillai

New Delhi – A disturbing new front has opened in the landscape of organized hate in India: the weaponization of artificial intelligence (AI). A new report from the Center for the Study of Organized Hate (CSOH), a Washington, D.C.-based nonpartisan think tank, meticulously documents how generative AI tools such as Midjourney, Stable Diffusion, and DALL·E are being systematically deployed to create and amplify visual hate content targeting the country's 200-million-strong Muslim community.

Titled “AI-Generated Imagery and the New Frontier of Islamophobia in India,” the report provides an urgent and early intervention into a rapidly evolving crisis. It analyzes 1,326 publicly available AI-generated images and videos from 297 far-right accounts on X (formerly Twitter), Instagram, and Facebook, collected between May 2023 and May 2025. The findings paint a chilling picture of how accessible technology is being harnessed to dehumanize a religious minority, propagate conspiracy theories, aestheticize violence, and normalize misogyny, with real-world implications for social cohesion and democracy in India.

The Alarming Scale: Key Findings from the Report

The CSOH report lays bare the scale, reach, and sophisticated nature of this synthetic hate campaign. The key findings are a clarion call for immediate action from platforms, policymakers, and civil society.

Proliferation of AI-Hate: Researchers identified 1,326 AI-generated hateful posts specifically targeting Muslims, with a sharp spike in activity from mid-2024 onward, coinciding with the wider availability of advanced AI tools.

Platform Amplification: While X hosted the most posts (509), Instagram drove the highest engagement, with 1.8 million interactions across 462 posts. This highlights the platform's potent role as an amplifier for visual hate content.

Dominant Hate Themes: The hateful content clustered around four dominant, overlapping themes:

  1. Sexualization of Muslim Women: This category received the highest engagement (6.7 million interactions), revealing a deeply gendered and misogynistic strand of Islamophobia.

  2. Exclusionary and Dehumanizing Rhetoric: Images depicted Muslims as animals (e.g., snakes in skullcaps) and as criminal infiltrators, garnering 6.4 million engagements.

  3. Conspiratorial Islamophobic Narratives: Long-debunked conspiracy theories like ‘Love Jihad,’ ‘Population Jihad,’ and ‘Rail Jihad’ were visually reinforced through AI imagery, receiving 6 million engagements.

  4. Aestheticization of Violent Imagery: The use of stylized aesthetics, like Studio Ghibli-style animation, made violent and hateful content palatable and shareable, particularly among younger audiences.

Mainstreaming by Far-Right Media: Hindu nationalist media outlets, notably OpIndia, Sudarshan News, and Panchjanya, played a central role in producing and legitimizing this synthetic hate, embedding it into mainstream discourse.

Catastrophic Platform Failure: In a test of platform enforcement, researchers reported 187 policy-violating posts. Only one was removed, demonstrating a near-total failure by X, Facebook, and Instagram to enforce their own community guidelines against AI-generated hate.

A widely circulated Ghibli-style image posted on Instagram depicted the demolition of the Babri Masjid in 1992.

Deconstructing the Digital Poison: Major Narrative Themes

The report goes beyond numbers, offering a semiotic and narrative analysis of how AI is reshaping hate propaganda.

1. Conspiratorial Islamophobic Narratives

AI imagery provides a potent visual language for old, debunked conspiracy theories. The report details how:

  • ‘Love Jihad’ is depicted through AI-generated images of menacing Muslim men with skullcaps looming over frightened Hindu women.

  • ‘Population Jihad’ is visualized through scenes of burqa-clad women with many children in impoverished settings, framing Muslims as a demographic threat and economic burden.

  • ‘Rail Jihad’ emerged after real railway accidents, with AI fabricating images of Muslim men sabotaging tracks, transforming tragedies into evidence of a non-existent Muslim conspiracy.

2. The Dehumanization and "Othering" of Muslims

AI tools are used to strip Muslims of their humanity, a classic precursor to violence. The report highlights:

  • Animal Metaphors: A recurring image is a snake wearing a skullcap, framing Muslims as deceptive and venomous. This echoes historical genocidal propaganda where targeted groups were likened to vermin.

  • Framing as Infiltrators and Terrorists: AI visuals create a false reality of Muslims as illegal immigrants and internal enemies. Following the tragic Pahalgam terror attack in 2025, AI-generated images falsely depicted Indian Muslims celebrating the deaths of Hindu victims, channeling public anger toward the entire community.

3. Gendered Hate: The Sexualization of Muslim Women

This was the most engaged-with category, revealing a particularly vile strand of hate. AI is used to create:

  • Soft-Porn Imagery: The report identified hundreds of Instagram pages posting AI-generated pornographic or suggestive images of women in hijabs or burqas, often with Hindu men. These images, as the report states, function as "symbolic acts of violence," depicting Muslim women as objects of conquest without agency or consent.

  • Fantasy of Domination: Captions accompanying these images often explicitly frame them as the spoils of war, humiliating the entire community by targeting its women.

4. Sanitizing Violence Through Aesthetics

Perhaps one of the most insidious uses of AI is the aestheticization of violence. The report notes the widespread use of charming, Studio Ghibli-style art to depict:

  • The demolition of the Babri Masjid, a traumatic historical event for Indian Muslims, transformed into a celebratory and visually pleasing spectacle.

  • Real incidents of police violence against Muslim worshippers, rendered in a soft, animated style that desensitizes viewers to the brutality.

The dataset reveals that OpIndia has shared multiple AI-generated images tied to narratives about forced conversions, abductions, and religious coercion by Muslims.

Catastrophic Platform Failure: A Test of Enforcement

The report’s experiment in reporting blatantly hateful, AI-generated content yielded abysmal results. Of 187 posts reported across X, Facebook, and Instagram for clear violations of hate speech policies, only a single post was removed. This failure reveals a dangerous gap between the rapid evolution of AI-powered hate and the sluggish, ineffective response of social media platforms. This inaction allows "slopaganda" (cheap, abundant synthetic content) to flood the information ecosystem, gaming algorithms and crowding out credible information.

CSOH flagged 187 harmful posts featuring AI-generated images that were in clear breach of the platforms’ own community standards. Of these, 108 were on X, 37 on Facebook, and 42 on Instagram. The posts were reported under the platforms’ designated categories: on X as “Hate,” “Abuse & Harassment,” and “Violent Speech,” and on Facebook and Instagram as “Hate Speech” and “Calls for Violence.” However, only one of the flagged posts on X had been removed at the time of writing this report.

The CSOH report concludes that generative AI has become a powerful "force multiplier" for anti-Muslim hate in India. It is not merely a new tool for an old problem but represents a qualitative shift, enabling the creation of credible, emotionally resonant, and highly shareable hate content at an unprecedented scale and speed.

The implications are profound. For Indian Muslims, this synthetic propaganda deepens a climate of fear, social exclusion, and physical threat. For Indian society, it corrodes inter-community relations, undermines constitutional values, and weakens democratic institutions. The fusion of AI with a well-established hate ecosystem marks a dangerous new normal.

"The risks are profound," the report states. "For Indian Muslims, this visual propaganda deepens the climate of fear, humiliation, and social exclusion... Without intervention, AI-generated hate will harden into the architecture of India’s digital sphere."

The failure to act on reported AI-generated hate means platforms are not merely ignoring violations but accelerating new forms of digital harm. This exposes a dangerous gap between the rapid evolution of AI technologies and the sluggishness of current enforcement models. To close this gap, platforms must invest in AI-aware moderation systems, strengthen synthetic media detection, and apply their rules with consistency.

Read the full report here:

ai-generated-hate-in-india-csoh.pdf

