Digital Services Act: Taming the Data Kraken

Anger, fear, shock: recommendation algorithms prioritise emotionalising content – with significant consequences for how we form opinions on the web.

The content we find on the internet – which search results, videos and social media posts come up first – is increasingly determined by a handful of giant internet corporations. In the process, they influence not only our consumer behaviour, but also how we form our political opinions. The EU’s Digital Services Act is designed to ensure greater transparency.

Quickly googling a bit of information, sending a message via WhatsApp, buying a new pair of shoes on Amazon, watching a video on YouTube or scrolling through an Instagram feed: we automatically associate most of the things we do on the web with certain apps or websites.

That is because our internet usage has become concentrated in ever fewer hands in recent years: much of our online activity takes place on platforms operated by Google, Facebook and Amazon, whose portfolios also include WhatsApp, YouTube and Instagram. Over the years these companies, whose business model is largely based on collecting personal data, have achieved a quasi-monopoly. The huge network effects of their platforms make it difficult for smaller competitors to enter the market and nearly impossible for internet users to bypass them. Their algorithms therefore have a decisive influence on our online interactions, determining what content we see, what news and opinions we engage with, and what products we are offered. This gives the internet giants major influence not just over our consumer behaviour, but also over the way we form our political opinions.

Trapped in a Filter Bubble

The crux of the matter is that it is almost impossible for outsiders to understand the criteria by which content is selected and sorted for us. Which posts appear prominently on Facebook, Instagram or YouTube, for example, and which do not, depends partly on paid content – posts that are displayed more frequently in exchange for money. In addition, algorithms select the content deemed particularly relevant or exciting for an individual user and place it more prominently. What logic and criteria these recommendations are based on is unknown, as the source code is kept strictly confidential.

This prioritisation has side effects that can even shape our view of the world. Anyone who watches speeches given in the Bundestag by members of the Alternative für Deutschland (AfD, Alternative for Germany) party, for example, is quickly led by the recommendations to supposed economic experts who warn of the imminent collapse of the system and in some cases openly incite anti-Semitism. This effect is known as the filter bubble: while in theory we can access a wide variety of information and opinions on the internet, we usually only see what already matches our own take on an issue. Recommendation algorithms also prioritise emotionalising content, which is why hate speech and conspiracy theories, for example, are particularly widespread in these circles, while moderate voices tend to go unheard.
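To make this dynamic tangible, here is a deliberately simplified sketch in Python of a purely engagement-driven ranking. It is a toy model, not how any real platform works – as noted above, the actual ranking code is confidential – and every signal, weight and post in it is invented for illustration.

```python
# A deliberately simplified toy ranker, purely for illustration.
# Real platform recommendation systems are proprietary and far more
# complex; all names, signals and weights here are invented.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_clicks: float   # assumed likelihood of a click (0..1)
    predicted_outrage: float  # assumed anger/shock provoked (0..1)

def engagement_score(post: Post) -> float:
    """Rank posts by expected engagement. Because emotive reactions
    (anger, fear, shock) correlate with clicks, comments and shares,
    a purely engagement-driven objective tends to push emotionalising
    content to the top."""
    return 0.5 * post.predicted_clicks + 0.5 * post.predicted_outrage

feed = [
    Post("Calm policy analysis", predicted_clicks=0.30, predicted_outrage=0.05),
    Post("THE SYSTEM IS ABOUT TO COLLAPSE!", predicted_clicks=0.35, predicted_outrage=0.90),
]

# Sorting by engagement puts the alarmist post first (0.62 vs. 0.18),
# even though its click appeal is only marginally higher.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.2f}  {post.title}")
```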
The impact of recommendation algorithms and the secrecy surrounding their mechanisms have recently attracted a great deal of criticism, as has the ease with which hate speech and deliberately launched false stories – fake news – achieve wide reach online. So far platform operators have taken rather half-hearted action to address these problems, preferring to call on users to take personal responsibility and use common sense. Even the introduction of labels marking some spreaders of fake news as misleading has done little to change the fact that the algorithms continue to steer many users, bit by bit, towards hate speech and right-wing extremism.

An Attempt to Introduce More Transparency

A legal framework is needed to rein in this dangerous dynamic. In December 2020 the European Union (EU) presented its draft of the Digital Services Act in an attempt to establish basic rules for the corporations, though the act still has to be adopted by the European Parliament and the member states. It stipulates that large platforms with more than 45 million users in the EU will in future be obliged to disclose their algorithms for independent audits. In addition, users are to be given the option of switching off personalised recommendations.

Like the General Data Protection Regulation, which has been in force at EU level since 2018, the Digital Services Act could at least create some transparency. It also provides for severe penalties of up to six per cent of a company’s global annual revenue, which could easily run into the billions for the larger platform operators. The proposal also establishes minimum interoperability requirements, which could help to loosen the platform monopolies at least a little. In practical terms, this would mean that messengers such as WhatsApp would have to allow messages to be exchanged with other services, making it easier for smaller providers to gain a foothold in the market.

If adopted, the Digital Services Act will be an important step towards regulating internet companies. It will not, however, change their basic business practices, because the collection, analysis and trading of personal data will still be possible with very few restrictions. The requirement that illegal content be removed more quickly is also welcome in principle, but it carries the risk of overblocking: when in doubt, Facebook, Google and the others have so far taken no chances, preferring to delete too much rather than too little. The fact that posts violating neither the terms of use nor applicable law are already regularly taken down raises a fundamental question that even the Digital Services Act addresses only tentatively: how much power do we want to give Big Tech and its algorithms over our communications? These companies have become powerful players, yet so far they have eluded almost all public scrutiny. If adopted in its current form, the Digital Services Act has the potential to curb some uncontrolled excesses. But it can only be a first step on the way to a digital constitution that clarifies the responsibilities of the new media.