"We and AI"
Beautiful, datafied world

Emilija Gagrčin, one of the authors of the study "We and AI – Living in a Datafied World" | Photo (detail): © Maria Gerasimova

As part of the project "Generation A=Algorithm", the Goethe-Institut, in cooperation with the Weizenbaum-Institut, conducted a survey on the concerns and hopes of young Europeans regarding AI. Emilija Gagrčin, one of the authors of the study, talks about surprising results in an interview.

By Juliane Glahn

Ms. Gagrčin, what particularly surprised you about the study?

Emilija Gagrčin: What really surprised me is how relaxed many young people are about automated decision-making systems. That many of them have no problem getting fitness recommendations from such a system is perhaps neither surprising nor particularly problematic. But a third of respondents had no problem with criminal proceedings being initiated against them on the basis of an automated decision. And as many as 58 per cent felt comfortable with or indifferent to predictive policing, the analysis of big data in law enforcement to identify potential criminal activity! I find that really alarming, and to me it suggests that young people are not, or not sufficiently, concerned with the democratic and human rights consequences of such technologies.
 
Researching news actively seems to be going out of fashion. More than half of all respondents believe that they are sufficiently well informed via social media platforms. Isn't this mindset dangerous?

Emilija Gagrčin: The truth is that many of us encounter political content and news in our social media feeds without having searched for it. This gives people the impression that they no longer need to seek out news actively. It does not mean that people avoid news, but rather that they feel they can stay informed without actively looking for information. In my opinion, this mindset makes intuitive sense at first, but it does indeed harbour some problems. One lies in the way algorithms curate content, namely on the basis of previous user behaviour as well as attributed user characteristics. As a result, not all users are equally "attractive" for news. In concrete terms, this means that people who rarely click on news and interact little with users who share news are also less likely to have news shown to them. In that case, the news doesn't necessarily find you. It is therefore of the utmost importance to cultivate active news consumption as a habit, that is, to really integrate it into the fixed routine of everyday life.
 
What conclusions do you draw from the study?

Emilija Gagrčin: Overall, I think that understanding the workings and implications of algorithmic systems should become a cultural technique. In this respect, the report's findings highlight for me the need to provide adequate resources to help young people assess the opportunities and risks of AI on an individual and societal level, and to empower them to navigate and act confidently in a datafied world. This is important for exercising their rights, for example at work or at school. Established organisations such as the Goethe-Institut also play a crucial role in imparting skills and knowledge. For one thing, as experienced educational institutions, they know how to prepare even dry and supposedly uninteresting content in a way that is suitable for young people and bring it to them. They also act as mediators between different actors from politics, business, culture, science and civil society. This creates very important networks that facilitate conversations and mutual understanding across professional and domain-specific boundaries. That matters, because it is not only young people who have to develop these competences, but basically all of society.
