Wednesday, October 8, 2025

The New Machinery of Hatred: How AI is Stoking Islamophobia in India

By New Age Islam Correspondent

8 October 2025

Summary: This article summarises the report published by the Centre for the Study of Organized Hate (CSOH), titled ‘AI-Generated Imagery and the New Frontier of Islamophobia in India’, on September 29, 2025. The report documents the use of generative Artificial Intelligence (AI) to produce and disseminate anti-Muslim visual hate content in India. The full report can be accessed at: https://www.csohate.org/2025/09/29/ai-generated-hate-in-india/

----

When people first heard about artificial intelligence, it seemed to offer more efficiency, creativity, and a way to escape boring tasks. But in India’s divided online world, AI is now being used for a darker purpose: to create hate. A new report by the Centre for the Study of Organized Hate (CSOH) shows how AI-made images are routinely used to spread Islamophobia on social media, giving old biases a new, tech-based twist.

The report analysed a staggering 1,326 posts between May 2023 and May 2025, drawn from 297 public accounts on Twitter (X), Instagram, and Facebook. These were not random posts; they were purposeful visual campaigns that used AI tools such as Midjourney, Stable Diffusion, and DALL·E to spread fabricated images that sexualize, demonise, and dehumanise Muslims in India.

The report's chief conclusion is straightforward: AI has become a new instrument in the spread of hate. Its immediacy, accessibility, and anonymity make it a very useful assistant to those who wish to poison India's public debates with suspicion between communities.

Digital Hatred in the Algorithmic Era

To understand the significance of the CSOH's conclusions, one first needs to grasp how India's online world has evolved. With over 900 million internet users, India is presently the second-biggest social media market in the world, after China.
Religious organisations, political parties, and groups of every persuasion have turned online platforms into arenas for contesting opinions. But over the last decade, this ecosystem has grown more intolerant. In this unstable situation, Islamophobia — the hostile portrayal of India’s roughly 200 million Muslims — has become commonplace online. Memes, fake videos, and hashtags like #LoveJihad or #PopulationJihad spread quickly, often mixing humour with hate.

The CSOH report documents how AI technology has given this trend further strength. "Before, a troll needed some skill — like Photoshop, editing, or time. Now, anybody can create convincing photos of Muslim 'villains' or conspiracies in a matter of prompts," says a senior researcher on the study. "The velocity and volume are unmatched."

Inside the Study: Mapping the Machine of Misinformation

The CSOH’s research combined quantitative analysis with detailed qualitative study. Researchers started by identifying almost 300 social media accounts — some verified, some anonymous — that regularly shared communal or extremist posts. Over two years, they archived every AI-made image that specifically attacked Muslims. Each image was independently confirmed as AI-generated through metadata scanning and forensic examination. They then counted user interactions: shares, likes, and views. The total? Over 27 million interactions. In short, millions of Indians have viewed — and, in most cases, spread — AI-manufactured hate.

The research classified the content into four general categories:

1. Sexualization of Muslim women
2. Exclusion and dehumanisation
3. Conspiracy narratives
4. Aestheticised violence

Each category serves a distinct psychological and social function of hate, ranging from making people feel frightened and ashamed to justifying violence.

The Gendered Weaponisation of Hatred

The most objectionable category, with nearly 6.7 million interactions, was the sexualization of Muslim women.
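As a brief technical aside on the verification step mentioned in the methodology above (images confirmed as AI-generated through metadata scanning): many popular image generators embed their prompt and settings directly in the file, and a few lines of standard-library Python can surface them. This is a minimal sketch, not the CSOH's actual forensic pipeline; the "parameters" keyword is a convention of some Stable Diffusion front ends, and missing metadata proves nothing, since screenshots and re-encoding strip it.

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_text_chunks(data: bytes) -> dict:
    """Return the key/value pairs of all tEXt chunks in a PNG byte stream.

    Some AI image generators (e.g. Stable Diffusion web front ends) write
    their prompt and settings into a 'parameters' tEXt chunk, so a hit here
    is a cheap first signal that an image is synthetic.
    """
    if not data.startswith(PNG_SIGNATURE):
        raise ValueError("not a PNG file")
    chunks = {}
    pos = len(PNG_SIGNATURE)
    while pos + 8 <= len(data):
        # Each chunk: 4-byte big-endian length, 4-byte type, data, 4-byte CRC.
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt" and b"\x00" in body:
            key, _, value = body.partition(b"\x00")
            chunks[key.decode("latin-1")] = value.decode("latin-1")
        pos += 12 + length  # length field + type + data + CRC
        if ctype == b"IEND":
            break
    return chunks
```

A real forensic workflow would go further (error-level analysis, model-fingerprint classifiers, reverse image search), but a metadata pass like this is a common first filter.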
Images created by AI depicted Muslim women in sexual or violent contexts, often accompanied by hashtags such as #HijabExposed or #LiberateFromIslam.

“These images support two false beliefs at the same time,” says Dr Meera Krishnan, who studies gender and media at Jawaharlal Nehru University. “First, that Muslim women are oppressed and need to be saved. Second, that they are sexually available or deceptive, as in the ‘Love Jihad’ story. Both beliefs aim to justify control and hatred.”

One viral AI image showed a "hijabi" woman unwinding her scarf before a Hindu man, captioned "Truth behind Love Jihad." Another showed Muslim women in burqas carrying guns, titled "Female Jihad Squad." This kind of imagery blurs the line between fantasy and fear, entangling sex and violence. According to the report, the combination is no coincidence: it appeals to voyeurism and chauvinism — two of the oldest components of male-dominated thinking.

From Dehumanisation to Digital Extermination

The second broad category projects Muslims as non-human, alien, or threatening. AI images, the CSOH found, depict Muslims as snakes, rodents, insects, or zombies — a visual narrative that frames religious identity as pollution. In one stark example, an AI image depicted a rat gnawing at an Indian flag, the rat's fur marked with a crescent moon and star. Another showed swarms of cockroaches crawling over a map of India with the caption, "Clean India, Remove Jihadis."

"These are not internet memes per se," asserts human rights activist Aqeel Ahmed, who has been tracking online hate campaigns since 2019. "This is classic fascist iconography — the same as was used in Nazi Germany or the Rwandan genocide. Dehumanisation necessarily comes before violence."

Imagery of "vermin" and "disease" has been employed in genocidal propaganda for centuries.
Now, AI enables anyone to mass-produce such images, polished and engineered to go viral.

The Myth Factory: AI and Stories of Conspiracy

The third, and potentially most significant, group is what CSOH calls "conspiratorial visuals." In these images, abstract concepts — such as Love Jihad, Land Jihad, or Population Jihad — become concrete visual narratives. One image showed a Muslim man talking respectfully with a government official, mosque minarets rising in the background. Another showed trains draped in green flags bearing the words "Rail Jihad," insinuating a Muslim conspiracy to derail India's trains.

"These images make rumours tangible," says one of the CSOH authors. "They make falsehoods seem true."

Early in 2025, a fabricated AI-generated photo of a "Muslim mob storming a Hindu temple in Delhi" went viral on X, reposted more than 600,000 times. It was entirely computer-generated, yet it was amplified by several verified influencers and even republished by a right-wing news site. Before fact-checkers could establish that it was false, the damage was done. False information spread faster than the facts — a pattern that recurs throughout India's online hate environment.

When Violence Is Art

The fourth group — "aestheticised violence" — is a disturbing normalisation of violence. Most images show Muslims as either the agents or the targets of graphic, film-like violence. Some show "jihadi warriors" amid scorched cities; others depict "riots," with Muslims as the attackers, that never took place. Such images borrow the palettes of anime and fantasy painting, making hate look glamorous. In March 2025, as "AI anime" trended on Instagram, many accounts began posting stylised images of "Hindu defenders" versus "Islamic invaders." They drew millions of likes.
"The artwork makes violence look harmless," says IIT Bombay digital anthropologist Ritika Sharma. "But becoming accustomed to violence in pretty formats is far more perilous — it dulls people's moral response."

The Amplifiers: Platform and Media Challenges

The CSOH report reveals not only who produces hate, but who spreads it. Platforms like Instagram and X, the study says, are the biggest conduits of AI-produced hate images. X carried 509 posts totalling 24.9 million interactions. Instagram had 462 posts and the highest interactions per post. Facebook was less busy but still participated, with 355 posts generating 143,000 interactions.

Worse still, perhaps, is the involvement of right-wing news outlets like OpIndia, Sudarshan News, and Panchjanya. The report chronicles several cases in which AI-fabricated images first surfaced on social media and were subsequently "reported" as news on these sites. One regional television channel aired a clip suggesting Muslim youths were stoning a Durga Puja procession. It was later found to be false, but no correction followed.

“This ecosystem works like a relay,” says CSOH co-author Aditi Menon. “AI-generated content starts on small accounts, then moves to popular media. Once it gets on TV or in print, it becomes credible.”

Moderation Meltdown: The Silence of the Platforms

The most startling finding of the CSOH study is not just that hate content existed, but that platforms proved almost entirely unable to control it. Of the 1,326 posts surveyed, only 187 were flagged by platforms as violating their terms of service. None were removed.

"They have AI-detecting mechanisms," says Menon, "but they fail to catch AI-bred hate. They miss fabricated images when the images are compelling or the captions are misleading."

The gaps are technical as well as political. In India, large technology firms are often under government pressure, and they do not always regulate communal content.
"When the government employs anti-Muslim narratives as a political force, it is naive to expect even-handed treatment from non-state platforms," says digital policy expert Prateek Waghmare.

The Legal Vacuum: India’s Unprepared Laws

While the European Union is building new AI regulation through the EU Artificial Intelligence Act, India remains unprepared. Neither the Information Technology Act (2000) nor the Intermediary Guidelines (2021) contains any provision on the misuse of synthetic media or generative AI.

"Current law treats misinformation as a content-moderation issue, not a violation in itself," notes Supreme Court lawyer Aparna Rao. "We have no definition yet of synthetically created hate images, no liability for the creators of AI, and no rules on traceability."

This gap, the CSOH report warns, allows a new kind of “grey propaganda” to flourish — content that sits between legality and morality, untraceable yet deeply toxic.

The Human Toll of Internet Hatred

The report situates AI-generated Islamophobia within a decade-long rise in communal tension, including lynchings in Uttar Pradesh and Haryana and the hostile treatment of Muslim vendors during COVID-19. Now, visual hate messages spread faster, reach more people, and are less accountable.

The psychological burden on ordinary Muslims is heavy. "When I open Instagram, I see these images — men in skullcaps with knives, women in burqas as terrorists," says Sana Ahmed, a Delhi-based journalist. "Even when you know that's not the case, it deprives you of a sense of safety."

This degradation of dignity, the report concludes, is just as damaging as physical violence. It isolates minorities, deepens suspicion, and undermines India's constitutional fabric of secularism.

The AI-Hate Industrial Complex

Behind every viral image is a network of creators, distributors, amplifiers, and money makers. Some accounts monetise hate through advertising; others trade in ideology.
What's new, the CSOH report says, is that AI has industrialised the process. "With one tool, a user can create a hundred different versions of the same anti-Muslim narrative, in different styles and languages," says Menon. "It is very easy to get started, and the rewards, such as followers, attention, or money, are enormous."

This produces what researchers call a "hate supply chain": a few motivated people create content, algorithms disseminate it, and influential institutions normalise it, all while platforms profit from the interactions.

The Road Ahead: Against AI Hate

The CSOH report prescribes as well as diagnoses. Its recommendations read like a survival guide for the age of artificial propaganda:

1. Draft India-specific laws to handle AI-fuelled misinformation and hate imagery, and apportion liability between users and developers.
2. Require provenance metadata — hidden tags that record where AI-generated images come from and how they were created.
3. Create independent digital adjudicatory bodies, modelled on Europe’s Digital Services Act, to oversee disputes around synthetic media.
4. Require AI model creators to build in "safety layers" that block the generation of dangerous or violent material.
5. Create a national network of researchers, NGOs, and journalists to track and document AI-fuelled hate.
6. Mandate algorithmic transparency, with platforms publishing data on how often automated hate is removed or amplified.
7. Introduce "circuit breakers" – filters that slow the spread of flagged content until it is reviewed.

Through these steps, the report maintains, AI-powered hate can be contained before it spreads.

The Mirror of Modern India

The CSOH study is more than a story about technology; it is also a story about society. AI did not create hate; AI made hate easier to spread.
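The "circuit breaker" recommendation in the list above (slow the spread of flagged content until it is reviewed) reduces, at its core, to a small piece of state consulted on the sharing path. The sketch below is purely illustrative; the class name, post IDs, and review workflow are invented for this example and do not describe any platform's actual implementation.

```python
class ContentCircuitBreaker:
    """Toy sketch of a 'circuit breaker' for flagged posts: once a post is
    flagged, further shares are paused until a human review releases or
    blocks it. All names and the workflow here are hypothetical."""

    def __init__(self):
        self._under_review = set()  # post IDs paused pending review
        self._cleared = set()       # post IDs a reviewer has released
        self._blocked = set()       # post IDs a reviewer has rejected

    def flag(self, post_id: str) -> None:
        """Pause sharing of a post (e.g. after automated detection)."""
        if post_id not in self._cleared and post_id not in self._blocked:
            self._under_review.add(post_id)

    def review(self, post_id: str, allowed: bool) -> None:
        """Record a human review verdict and lift the pause accordingly."""
        self._under_review.discard(post_id)
        (self._cleared if allowed else self._blocked).add(post_id)

    def can_share(self, post_id: str) -> bool:
        """The share endpoint consults this before fanning a post out."""
        return post_id not in self._under_review and post_id not in self._blocked
```

The design choice the recommendation implies is simply that virality waits for review, rather than review chasing virality; everything else (detection, queueing, appeal) sits around this gate.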
The political and cultural conditions that allowed Islamophobia to flourish offline in India now work just as efficiently online. When hatred is made easy and looks good, it becomes harder to spot and easier to accept. In this way, AI does not merely reflect prejudice — it teaches it.

But resistance is visible too. Rights organisations like Alt News, Digitally Right, and Equality Labs have begun monitoring AI-linked hate trends. Independent journalists and artists are using the same tools to build counter-narratives — highlighting India's diversity and everyday coexistence through digital art. Still, these efforts are meagre next to the enormous scale of the propaganda.

"Technology always mirrors its users," says Dr Krishnan. "The question is — who will use it more effectively: those who spread hate, or those who heal?"

Conclusion: Between Code and Conscience

The story of AI-generated hate in India shows how new technologies, left unguarded, can be turned into weapons. In a society already divided along religious lines, these online falsehoods do more than reflect bias — they manufacture it, piece by piece.

The CSOH report ends on a sombre note: "Without ethical governance, AI will not just simulate reality — it will rewrite it."

In a culture that values diversity, the danger is as much ethical as technological. India's digital future depends on the ability of its lawmakers, platforms, and users to tell freedom from falsehood, imagination from malice.

Ultimately, the worst code is not the one written by machines. It is the one written in silent ink on the human heart.
URL: https://www.newageislam.com/muslims-islamophobia/machinery-hatred-ai-islamophobia-india/d/137149
