
Data Protection
A Society is more than the Sum of its Individuals

What happens when apps become our new urban architects? Why we urgently need to change the way digital applications are regulated.
Photo (detail): © picture alliance/Goldman

Facebook, Siri or Alexa – where artificial intelligence technology is concerned, people often complain that privacy is insufficiently protected. But intelligent systems are less interested in individuals’ data than in generalising and standardising the life contexts they are automating.

Zoom and Skype, social media and smartwatches – but also smartphones, banks and insurance companies – use complex algorithmic systems that we commonly refer to as systems based on “artificial intelligence”. To function well, some of these collect large amounts of data – this applies, for instance, to providers such as Spotify, the Google search engine, Instagram, Siri or Alexa.
 
Certain conclusions can be drawn from these data – conclusions that at times extend far beyond the musical preferences or search habits of users. For example: if a person listens to music that’s popular in the LGBTQI+ community, is it possible to make assumptions about their sexual orientation? How certain is that assumption? What about the cultural background of someone who listens to Black music, R&B or music in a particular language?
 
Much of the data collected seems trivial at first glance but can become sensitive in a particular context, which is why the end of privacy is lamented in the press and by the general public. What is often forgotten, however, is that these infrastructures are usually not interested in the actual private individuals behind the data. Even when they do gather and use personal data – such as when Spotify offers a selection of music customised to one individual – the personal data themselves are irrelevant to the system as a whole.

Mathematical data pulp

The concept of “artificial intelligence” refers to socio-technical systems in which human and machine elements are difficult to separate. These systems rest on assumptions about the world and about people, their goals and intentions – assumptions that go back to decisions made by humans. The same applies to the processing this requires, which takes the form of data. Information about people is broken down into data points, like the pieces of a jigsaw puzzle. The mathematical formulae used to process these data then amalgamate them into a pulp: individual people are turned into averages and statistical profiles – fine-grained, yet “generic”, pigeonholed individuals.
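
To make the “pulp” image concrete, here is a minimal, purely illustrative Python sketch – not any provider’s actual pipeline; the user names, genres and numbers are invented. Individual listening histories are sorted into coarse profiles and then averaged, so that the system ends up reasoning about cohorts rather than about any single person:

    from collections import defaultdict

    # Hypothetical per-user listening counts by genre (invented data).
    listening = {
        "user_a": {"r&b": 40, "pop": 10},
        "user_b": {"r&b": 35, "pop": 15},
        "user_c": {"pop": 50, "rock": 20},
    }

    # Pigeonhole each user by their dominant genre, then average within
    # each pigeonhole: the individuals dissolve into statistical profiles.
    profiles = defaultdict(list)
    for user, counts in listening.items():
        dominant = max(counts, key=counts.get)
        profiles[dominant].append(counts)

    for profile, members in profiles.items():
        genres = {g for m in members for g in m}
        average = {g: sum(m.get(g, 0) for m in members) / len(members) for g in genres}
        print(profile, average)  # the system works with these averages, not with "user_a"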
 
To use an image: the focus here is not on the single tree in the forest (the individual) but on the forest itself (society). The ambivalence and ambiguity of human life, on the other hand, cannot be fully translated into data and algorithms. Nor are these infrastructures geared towards that – their purpose is rather to shape people’s lives, as the British legal scholar Michael Veale has observed. These systems are used to automate certain processes: where a manual approach might have allowed various options and routes, standards are set that no longer permit this flexibility. This starts with the decision about what is datafied and what is not. What counts as a datum, and in what format, is a decision about what the system sees – and about what does not exist for the system because it has never been datafied. The way access, participation and interaction happen within a service is thus dictated by the systems’ standards, data formats and datafication decisions.
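
As a small illustration of this point about datafication – again a sketch with invented field names, not any real service’s data model – the system only “sees” what its schema defines as a datum; anything that was never turned into a field simply does not exist for it:

    from dataclasses import dataclass

    @dataclass
    class ListeningEvent:
        # Invented schema: only what is defined here exists for the system.
        user_id: str       # pseudonymous identifier
        track_id: str
        timestamp: float
        # There is no field for *why* the track was played (a mood, friends
        # visiting, background noise) – that context was never datafied, so
        # the system cannot take it into account.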
 
This is exactly where the essential problem of regulating these systems is revealed: fundamental rights and data protection are rights of individuals; the entire system of fundamental rights, and sub-constitutional law in particular, is oriented towards the individual. In other words: where fundamental rights are concerned, the German legal system only recognises trees, not a forest. The technologies are regulated with the current instruments of the law as if the forest could be controlled by regulating individual trees.

Identifying crimes in a cluster

A good example of this incongruity between the legal system and the AI systems currently in operation is predictive policing. These systems identify behaviour patterns for different categories of crime so that the information can be used to prevent similar offences. Because these technologies amount to an infrastructure while data protection only regulates individual rights, some of the programmes used in Germany escape the current statutory regulation. In several parts of Germany, for instance, the state police use the Precobs software, which predicts burglaries with a particular modus operandi within a certain timeframe and geographical area, drawing on anonymised data about the type of crime, the time and the place. The regional data protection authorities in those areas have authorised the use of the software – since no personal data are processed, it does not fall within their remit.

The Precobs software is used in locations including Baden-Württemberg and North Rhine-Westphalia with the intention of preventing burglaries. | Photo (detail): © picture alliance / Zoonar / Robert Kneschke

But the system does raise a few questions: if a bigger police presence suddenly becomes visible in certain districts, does that make residents feel safer? Or does it instead drive away those who can afford accommodation in another part of town? These questions are not defined in legal terms. Research shows that even the presence of police at football matches increases hooligans’ propensity to violence, which is why officers in civilian clothing are deployed there.
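
To make the mechanism described above more tangible, here is a minimal sketch of the general “near-repeat” idea behind such tools. It is not Precobs’ actual algorithm, and the function name, threshold and grid cells are invented: anonymised incident records (offence type, time, coarse location) are counted, and areas with a cluster of recent burglaries are flagged for increased attention:

    from dataclasses import dataclass

    @dataclass
    class Incident:
        offence: str        # e.g. "burglary" – no personal data involved
        day: int            # day number of the report
        grid_cell: tuple    # coarse geographic cell, e.g. (row, column)

    def flag_cells(incidents, today, window_days=7, threshold=2):
        # Flag cells with at least `threshold` burglaries in the recent window.
        recent = [i for i in incidents
                  if i.offence == "burglary" and 0 <= today - i.day <= window_days]
        counts = {}
        for i in recent:
            counts[i.grid_cell] = counts.get(i.grid_cell, 0) + 1
        return [cell for cell, n in counts.items() if n >= threshold]

    # Invented example: two recent burglaries in the same cell trigger an alert.
    history = [Incident("burglary", 10, (3, 4)),
               Incident("burglary", 12, (3, 4)),
               Incident("burglary", 2, (7, 1))]
    print(flag_cells(history, today=13))   # -> [(3, 4)]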
 
The question of the social effects of programmes like Precobs has been addressed neither politically nor in legislation. From a legal perspective, attention has focused exclusively on whether or not the software violates individual rights under data protection law. It becomes apparent here that the social stability of an entire town cannot be captured either by predictive policing tools or by regulation aimed at individuals – which is why it is regularly ignored.

Taking social values into consideration

Socio-technical systems like predictive policing show how, in an individualist society, the community is not seen as a whole. A society is more than the sum of its individuals and requires more than rights for individuals – those rights need to be balanced with social values. This must always be taken into account when looking at algorithmic systems. Democracies have so far neglected to do so. That’s our homework for the future.
