AI and Violence Against Women: Who Programmes Power?

An illustration of a fist reaching out of a computer towards the user.
Illustration: © Ricardo Roa

Violence against women has long since spread from the analogue world into the digital sphere. As the digital realm undergoes a profound transformation driven by the use and rapid development of artificial intelligence (AI), violence against women is also being shaped by AI. What risks does an AI-supported digital world pose for women? And could AI offer solutions to the problems we already face?

One in three women experiences violence in her lifetime. That amounts to 840 million women worldwide, according to a newly published statistical estimate by the World Health Organization (WHO); the number of unreported cases is likely to be far higher. Each November, the global community turns its attention to this issue around the International Day for the Elimination of Violence against Women on 25 November. In the weeks surrounding this date, discussions centre on women’s protection from violence: domestic abuse, sexual violence, femicide, all serious security concerns affecting half the population. For some time now, another topic has been on the agenda: violence that has spread from the analogue into the digital realm, growing each year and fuelled by rapid developments in AI.

Digital Violence – What Is That?

“Digital violence” is an umbrella term for “a wide range of forms of attack aimed at degrading, slandering or socially isolating a person, or coercing or blackmailing them into doing something specific,” as defined by the German Federal Association of Women’s Counselling Centres and Women’s Helplines (bff). This may take the form of cyberstalking, unauthorised tracking, image-based abuse or hate speech. The bff warns that digital violence rarely occurs in a vacuum; instead, it often accompanies or reinforces analogue violence.

Hidden behind online anonymity, it is easy to spread hate and abuse. With a single click, abusive insults or even generated pornographic content can be sent across the internet within seconds—either directly to victims’ inboxes or as posts in public forums for all to see. The impact of largely unregulated AI technologies in this context should not be underestimated.

How Does AI Play into This?

The basis of violence against women, analogue and digital alike, is sexism and patriarchal power structures. Within these, AI spreads almost unchecked: the technology learns from the data and systems we provide, and reproduces that learned image, prevailing power structures and prejudices included, without critical reflection. AI becomes a representation of those who develop it: mostly male, white and privileged, which the technology then treats as a “normal” picture of society. At Amazon, this went so wrong that the case made headlines: an AI tool designed to screen applicants’ CVs simply filtered out those containing the words “woman” or “girl”.

Yet the dangerous truth reaches further: artificial intelligence is also deliberately used as a weapon against women, above all in the field of so-called “deepfakes”: artificially created images and videos that draw on publicly available pictures of real people and look deceptively authentic. They can pose a threat to democracy, for example when AI is used to create a realistic-looking video of a female politician appearing drunk and incoherent. Far more often, however, deepfakes are found in the pornographic sphere: in 2019, the Dutch company Sensity AI reported that 96% of deepfakes were of a pornographic nature. One example is “nudifier apps”, which enable digital assault by generating fake nude photos: a publicly accessible photo is uploaded into the app, and with a single click the AI generates false but realistic-looking nude images of real people.

Victims of these technologies are often women in the public eye – politicians and celebrities – but private individuals are affected as well: mothers, students, schoolgirls. AI models can even generate entire pornographic films based on images of individuals, usually without their consent. German actress and author Collien Fernandes recounted her traumatic experiences with digital sexual violence in an interview with Deutschlandfunk: “For me, this is clearly digital rape.” Previously, AI-generated deepfake nude images and pornographic videos of her had been circulated via a fake LinkedIn profile – some depicting violent content.

How Can We Combat Such Violence?

Regulation would help, but responsibility from developers and companies seems a long way off. Lawmakers and the judiciary are struggling to keep pace with the need for new rules governing AI, not least because of the speed at which the technology is evolving. Under German law, there is currently “no specific paragraph prohibiting the creation and distribution of non-consensual deepfake pornography,” Deutschlandfunk reports.

New legislation is needed, but “regulation alone is not enough,” warns Eva Gengler, a doctoral researcher in business informatics specialising in AI governance and feminist AI. We must think beyond regulation: not only containing risks, but also asking how AI could be deliberately used to create a more just society.

The Solution to Violence Against Women?

Artificial intelligence itself is not the villain of the story. This is where Gengler’s research comes in, showing how we can harness this new companion for positive ends. “Fundamentally, this technology amplifies what already exists, which is not automatically bad. As an amplifier of society, it can also learn from the good and reinforce the good – if we want it to.” Through fair and more thoughtful prompting, clear objectives in AI governance, and a justice-oriented perspective in development, it would be possible to train technology to reveal the biases present in our society. “The purpose for which we use AI is crucial. What AI does is recognise patterns and reproduce them. Of course, we could also say: show us where the problem lies and do it differently!”

And in Practice?

Beyond questioning bias and promoting fairness, AI can even provide direct assistance in combating violence against women. In the case of hate speech, AI’s rapid analytical capabilities are particularly useful when dealing with large volumes of data. The Lamarr Institute for Machine Learning and Artificial Intelligence sees an opportunity here: “AI systems based on natural language processing (NLP) can automatically detect offensive or degrading content and initiate appropriate measures,” the institute explains. The Germany-based start-up “So Done” goes a step further: the company uses AI software to identify hate speech that is prosecutable. Users can take direct legal action against hate speech via the platform. Any financial compensation awarded is shared equally between the user and the start-up. “There are so many possible ways to use AI for good,” Eva Gengler concludes. It is up to us, as a society and global community, to decide how best to harness this advanced technology for our benefit.
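To make the flag-and-act flow the Lamarr Institute describes a little more concrete, the sketch below shows it in highly simplified form. It is a toy keyword filter only: real NLP moderation systems rely on trained language models, not word lists, and every term, function name and action label here is invented for illustration.

```python
# Toy illustration of automated content screening, NOT a real moderation system.
# Production NLP systems use trained language models; this sketch only shows
# the basic pattern: detect flagged content, then initiate a follow-up measure.

FLAGGED_TERMS = {"hate_term_a", "hate_term_b"}  # placeholder terms, not a real lexicon


def screen_message(text: str) -> dict:
    """Return a simple moderation decision for one message."""
    # Normalise: lowercase each word and strip surrounding punctuation.
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    hits = sorted(words & FLAGGED_TERMS)
    return {
        "flagged": bool(hits),
        "matched_terms": hits,
        # In a real platform this might trigger deletion or a legal referral.
        "action": "forward_for_review" if hits else "none",
    }


print(screen_message("an ordinary, harmless comment"))
print(screen_message("a comment containing hate_term_a"))
```

A platform like the ones mentioned above would replace the keyword set with a learned classifier and route flagged items to human reviewers or, as in So Done’s case, to a legal workflow; the surrounding logic stays the same.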