Online Violence: Criminal Liability in the Digital Space

A large badge from the UK charity Girlguiding with the words “Online harm is real harm. End it now” printed onto the material. New research from the charity has revealed that over three quarters of 13- to 21-year-old girls and young women have experienced harm online in 2021. | Photo (detail): Ben Queenborough © picture alliance / empics

Supposedly funny images on Facebook, a fake dating profile on Tinder, insults on Twitter: in this interview, Anna Wegscheider from advisory organisation Hate-Aid discusses gender-specific differences in digital violence and the slow-moving judiciary.

Ms Wegscheider, you’re the in-house lawyer for Hate-Aid, one of the few advisory services for digital violence in Germany. What is digital violence?

We use the term digital violence because it’s a very broad term that captures a wide variety of phenomena. First of all, it contains the term violence, which, in addition to physical violence, notably also comprises psychological violence. Secondly, the focus is on the “digital”, meaning actions via technical means of communication or online. Digital violence is not a legal term and also goes much further than hate speech or hate crime, for example. These are comparatively narrow because they presume a clear objective. However, we also have cases where people are attacked or hacked via spyware, for example, or where the motives are unclear. These cases would not be covered by hate speech or hate crime.

Why is raising awareness for digital violence so important when there is so much physical violence in the analogue world as well?

Unfortunately, we frequently hear this misconception from victims as well. When they attempt to report a case of digital violence, they are sadly often still not taken seriously. At worst, they are asked by police officers why they would say these things online, or told to simply log out of the respective platform. That’s incredibly unrealistic. These days, many people rely on the internet for professional purposes. And even if they don’t, the internet is one of the biggest discourse spaces we have. If we tell those affected by digital violence to withdraw from the net if they can’t take it, we push more and more opinions out of that discourse space. What’s left is, to put it bluntly, a small, aggressive, loud, often right-wing minority that takes over the discourse completely. Moreover, what digital violence does to people is strongly underestimated. A threatening scenario from the online sphere quickly spills over into real life as well. Closing your laptop is not going to help.

What kinds of cases do you see most commonly in your work?

The most common case is the classic insult – that means swear words, vulgarity and other forms of disparagement – followed by slander and defamation. For women, these are frequently degrading, often sexualised contents directed at their looks and their gender as such. What we’ve started to see more frequently in recent times is image-based violence. Those cases primarily affect women or people who are read as women; for men, image-based violence is practically irrelevant. This can be the publication or distribution of pornographic images or videos without the consent of the depicted person, for example, or the creation of a fake profile with a photo of the affected individual, such as a dating profile.

Which groups of people are primarily affected by digital violence?

As a rule, it can affect absolutely anyone, regardless of whether someone is active on social networks at all. However, certain groups such as journalists, politicians at all levels, scientists, activists and members of marginalised or discriminated groups are particularly frequently affected. When these categories overlap or coincide, the risk of being affected by digital violence rises sharply.

Among the cases we see in our advisory services, there is generally a roughly equal distribution of genders, with a slightly higher proportion of women. However, as far as litigation financing is concerned, it’s more like 70 percent women and 30 percent men. In a nutshell, litigation financing means that we bear the cost of civil litigation in suitable cases, thus taking on the full financial risk for those affected. Based on our agreement with them, if the litigation is successful and the other party has to pay compensation, this money flows back to Hate-Aid so we can help other victims in turn.

In my view, the larger share of women receiving litigation financing is due to a variety of factors: they usually only come to us when really extreme content is involved, either because they are unsure whether something counts as digital violence in the first place, or because they don’t want to take up resources, assuming that others are much worse off.

Are there certain topics where you observe an increase in cases of digital violence?

The classics are the climate catastrophe, refugees and migration, feminism and women’s rights, more recently the COVID-19 pandemic and now increasingly the war in Ukraine. These are polarising topics that are discussed in the public sphere with a lot of interest. Taking a clear position on them can fairly quickly lead to being attacked for it.

Who is on the other side, who are the perpetrators?

That’s often not so easy to say. For example, the Federal Criminal Police Office statistics show that this specific type of orchestrated, strategically deployed hate on the internet predominantly comes from the right. It is also rarely really about the attacked individuals themselves but about what they stand for. The goal of this orchestrated hate is to force these people, and most of all their opinions, out of the discourse. In many cases, however, we do not receive any information about whether the perpetrator is male or female or what their motive was. Still, particularly when the attack comes from right-wing circles, the perpetrator is usually a white male.

Discussions in comment sections can quickly become heated. What tells me that something is more than just a stupid comment and when should I get support?

When in doubt, I would say: always. If a line has been crossed for you personally, that doesn’t necessarily mean something is criminal, but getting advice can still make a big difference for you on a personal level. With our advisory services, and sometimes through litigation financing as well, we can provide support as required. We also help with things like bans on the disclosure of information from the civil register, to prevent someone from finding out where a victim lives. However, you can also report digital violence directly – a widespread misconception gets in the way here: you don’t need to know exactly which criminal offence you’re dealing with; that’s the job of the police or the public prosecutor’s office. The worst that can happen is that the law enforcement agencies or the court tell you that the behaviour is not criminal. That’s also why we always advise people to report digital violence when in doubt, because this hesitation means that a lot flies under the radar.

The criminal offences that apply to digital violence are the same as for analogue violence. Would a separate criminal offence be desirable to improve prosecution?

It’s not always necessary to reinvent the wheel just because something happens in a different context; it would be better to plug the gaps in existing criminal offences. A good example: in 2021, the criminal offence of “rewarding and approving of offences” was amended. Prior to that, only the so-called endorsement – the approval – of criminal offences that had already been perpetrated was criminal. However, women in particular are frequently told online that they should be raped, where the act has usually not (yet) been committed. These threats were often not criminal because they weren’t sufficiently concrete. The legislature, due in no small part to pressure from civil society, saw a gap in criminal law and amended it, so that telling someone they should be raped can now be criminal as well.

It would therefore be crucial to sensitise law enforcement agencies and the judiciary to these things. Let’s go back to the example of insults: there is still a notion that insults happen in private. The classic case: your neighbour insults you across the fence. However, the dynamics are completely different online. An insult is uttered publicly and remains there, freely accessible. Law enforcement agencies and the judiciary need to understand that an insult in the private sphere isn’t the same as an insult online.

In recent years, prominent people like politician Renate Künast or climate activist Luisa Neubauer have successfully litigated against digital violence, attracting a lot of media attention in the process. Can I as an average citizen hope to be successful in reporting something to the authorities?

Whether you’re a public figure is completely irrelevant for criminal proceedings, and the same goes for civil litigation. Through our litigation financing, we support far more people who are not in the public eye. However, they often don’t want this publicised, for example to avoid yet another wave of hate. That’s why people like Renate Künast or Luisa Neubauer are particularly suitable for high-profile litigation: they are in the public eye anyway and know the risks.

Otherwise, what’s more relevant for your chances of success are aspects like: who are the perpetrators attacking me, are they acting under their real names or anonymously? Which platform are we on? Particularly when we’re talking about image-based violence, for example, a lot of that happens on porn platforms. Porn platforms are more difficult to deal with per se because they are often based abroad and not subject to the same legal regulations as the big social media platforms. Overall, assessing whether a case promises to be successful therefore depends much more on the individual details.

Social media platforms also frequently have a hard time cooperating with law enforcement agencies. Is anonymity still an appropriate concept in this day and age? Would an obligation to use real names be helpful?

We have a clear position against an obligation to use real names. First of all, it’s a fallacy to believe that less hate will be spread if people have to use their real names – plenty of people act under their real names already. Secondly, those who really want to threaten others would still have the technical ability to hide their true identity or use a pseudonym. In addition, an obligation like that doesn’t just affect the potential perpetrators but the victims as well: with the real names of those affected, perpetrators receive additional information.

On top of that, there are important reasons why online anonymity must be guaranteed – think of investigative research or the work of a political opposition that is being suppressed in its country. But you can also try to strike a balance, which is what we advocate for: for example, a platform can say that real names or images aren’t required but a phone number is. Or platforms can be obliged to retain IP addresses related to certain incidents. So there are absolutely options that guarantee a certain degree of anonymity while simultaneously making things easier for those affected.

There is still much to be done where both police and legislation are concerned. What is the biggest area that still needs work?

One of the biggest areas that still needs work is undoubtedly the notion that digital violence isn’t real violence. We need a change of mentality; we need to ask ourselves: what does it actually mean for us as a society if we don’t do anything about it? Following on from that, we will need to tackle the technicalities: we need to plug the gaps in criminal law, make criminal prosecution more effective and efficient, and lower the threshold for civil-law options. Things like witness protection, for example, so I don’t have to provide my home address when reporting an offence. Or making it easier to get a ban on the disclosure of information from the civil register. We also need more advisory organisations. Hate-Aid is currently one of very few such places; we can’t complain about not receiving enough enquiries – on the contrary. Expanding these structures is imperative, or the case load won’t be sustainable in the long run.

The interview was conducted by Natascha Holstein, online editor of the Zeitgeister magazine.