Young Europeans and Artificial Intelligence Online Survey We and AI
What are the concerns and hopes of young Europeans about artificial intelligence? The study “We and AI” tried to find out. Emilija Gagrčin, one of the authors, talks about expected and surprising results in an interview.

Ms. Gagrčin, what particularly surprised you about the study?
To be honest, a lot of findings! On the one hand, I thought the differences between countries would be much greater, but apparently young people in Europe have much more in common when it comes to technological issues than originally thought. At the same time, our report reveals some significant differences that are rooted in the respondents' level of education.
What really surprised me, though, is how relaxed many young people are about automated decision-making systems. That many of them have no problem getting fitness recommendations from such a system is perhaps neither surprising nor particularly problematic. But a third of respondents also saw no problem with criminal proceedings being initiated against them on the basis of an automated decision. Furthermore, as many as 58 per cent felt comfortable with or indifferent to predictive policing, the analysis of big data in law enforcement to identify potential criminal activity! I find that really alarming, and to me it suggests that young people are not, or not sufficiently, concerned with the democratic and human rights consequences of such technology applications.
Can you say some more about the educational differences and the country differences that came to light in the analysis of the study?
People with a lower level of education are less confident talking about AI topics than people with a higher one, and they are more pessimistic about their professional future in connection with AI. This was to be expected from a social science point of view, but it is still sobering to see it in the data.
In fact, we were able to see some country differences in the questions about the use of AI in different areas. For example, people from Germany and Poland were much more likely than people from Italy and Sweden to say they were uncomfortable with the automatic setting of working hours. Overall, Swedes felt most comfortable with AI use in various areas.
I wouldn't say that actively searching for news has gone out of fashion; I would instead try to create an understanding of what underlies this perception. People live in an algorithmically driven multimedia platform environment. The truth is that many of us see online political content and news in our social media feeds without having searched for it. This gives people the impression that they no longer need to actively search for news. It does not mean that people avoid news, but that there is simply a feeling that one can be informed even without actively seeking information. In my opinion, the fact that this mindset arises makes intuitive sense at first, but it does indeed harbour some problems.
“Active engagement with the news is one of our civic duties that cannot simply be outsourced.”
And yes, this is "dangerous" insofar as studies have shown that people who think that "news will find them" are less active users of traditional media over time and that this phenomenon can have a negative impact on political interest and voting behaviour.
It is therefore of the utmost importance to cultivate active news consumption as a habit, i.e. to really integrate it into the fixed routine of everyday life and to search for information directly in the news app, newspaper website or media library. This is the only way to really have some control over what you see and not just to rely on some algorithm or other. Active engagement with the news is one of our civic duties that cannot simply be outsourced.
Political ads in particular are not considered useful by respondents - but other personalised ads are. Why are political ads less popular? What does this say about the perception of political ads in an online space?
There is nothing new about the fact that people have an aversion to advertising. It has also been documented in studies on television use. A common complaint, for example, is that advertising interrupts the flow of entertainment and is therefore simply annoying. More recent studies show that when it comes to political content on social media, people's judgement tends to be more morally laden.
“People generally don't like to feel manipulated, and political microtargeting can easily be seen as doing exactly that.”
Another point that could play a role, especially in relation to political advertising, is awareness of scandals such as Cambridge Analytica, which exposed the mass manipulation of voters on the internet. People generally don't like to feel manipulated, and political microtargeting can easily be seen as doing exactly that. Why, then, do people tend to like commercial advertising? As I said, social media is usually used for escapist or identity-building purposes, and cultivating a lifestyle, which can include consumption, is part of that. Some platforms serve ads in a highly intuitive and well-curated way, so that the ads seem relevant and interesting.
According to the study, young people prefer educational institutions as standard setters rather than tech companies. To what extent are educational institutions qualified in comparison to tech companies, also with regard to the equipment (personnel and technical as well as financial and time)? What are the arguments in favour of educational institutions being standard setters?
I read this result as a desire for people whom students trust to be involved in technology development. This does not mean that schools will suddenly become responsible for developing technology. But at the moment, some things are problematic.
For one thing, especially during the coronavirus pandemic, products from all over the world have been used in the classroom. Not all of these products are in the European educational tradition, and not all of them are compliant with our legislation. For instance, I recently learned that some Chinese products being used in the Netherlands are not in line with the GDPR. This is certainly also related to the fact that schools had to act quickly during the pandemic and took the most accessible, cheapest or most popular products.
From conversations with teachers, I also know that in many schools the whole digitalisation process depends entirely on committed teachers. Not everyone can afford to acquire this knowledge as a sideline, and this delays the implementation of digitalisation immensely. In addition, I see state authorities - in Germany or in the EU - as having a clear responsibility to check products for their legal compatibility before they are used here. That the learning data of young Europeans could end up in the hands of the Chinese government is unacceptable.
The second problem is that commercially developed products do not necessarily always have the students' best interests at heart. Companies would have to involve school staff and even students in development processes or testing phases. This could help ensure that technology is oriented towards people and not only towards what is technologically possible.
What conclusions do you draw from the study?
I think overall that understanding the workings and implications of algorithmic systems should become a cultural technique. In this respect, the report's findings highlight for me the need to provide adequate resources to help young people assess the opportunities and risks of AI on an individual and societal level and to empower them to navigate and act confidently in a datafied world. This is important in terms of exercising their rights, for example at work or at school. When I talk about providing adequate resources, I simply mean equipping the formal and non-formal education sector with human and financial resources to be able to offer programmes that focus not only on technical but in particular on democratic skills education in relation to datafication and AI. I am well aware that schools are overwhelmed with everything, so I think there should be much more focus on cooperative programmes between the formal education sector and youth organisations at the national and European level.
Established organisations such as the Goethe-Institut also play a crucial role in imparting skills and knowledge. For one thing, as experienced educational institutions, they know how to prepare even dry and supposedly uninteresting content in a way that is suitable for young people and how to bring it to them. And they also act as mediators between different actors from politics, business, culture, science and civil society. This creates very important networks that facilitate conversations and mutual understanding across professional and domain-specific boundaries. This is also very important, because not only young people have to develop these competences, but basically all of society.
Study “We and AI”
Young adults in Europe (18 to 30-year-olds) were able to express their fears, reservations and hopes with regard to artificial intelligence in an online survey starting in September 2020. The results have been collected in a representative survey that provides information on how Europe's young population currently perceives artificial intelligence and what importance they attach to it, both now and in the future.
What do young Europeans think about social media? How often do they use them? What do they use them for? Do they trust the providers of these ostensibly free-of-charge platforms? And do they know enough about the algorithmic logic behind the personal feeds? These questions were put to 3,000 young adults in a survey that is unique in Europe. As part of the Long Night of Ideas, organised by the German Federal Foreign Office, AI experts will evaluate and subsequently discuss the answers.
From November 2020 to January 2021, the Goethe-Institut and the Weizenbaum Institut developed an extensive survey for the format We and AI. The opinion research institute respondi then conducted the survey in six European countries on the importance of digital technologies in the everyday and professional lives of young people. Based on the responses, a representative picture emerged of the mood about the use of AI as well as of prior knowledge and assumptions about AI.