Lessons to learn

Lessons to learn: Impression of Tactical Tech’s “The Glass Room” project. | Photo (detail): © David Mirzoeff / Tactical Tech

What do young people need to know about artificial intelligence (AI) in order to use it responsibly and make informed decisions? Stephanie Hankey, co-founder of the non-governmental organisation (NGO) Tactical Tech, talks about her experiences with young digital natives and the need for educational institutions and civil society to join forces to promote digital literacy.

By Harald Willenbrock

Stephanie, at EUNIC’s AI Week in December 2020, you said that Tactical Tech aims “to engage young people with technology that is neither overly enthusiastic nor dystopic.” What exactly are you hoping to achieve?
 
We want to encourage critical thinking around technology and AI. When we are overly optimistic and enthusiastic about technology and how it could solve our problems, or the opposite, when we are overly negative and suspicious about how it could ruin the way we live, then we are no longer able to meaningfully or maturely engage with the very real challenges technology presents. We think the complicated middle ground is more fruitful and interesting. Sometimes technology is great and sometimes it is terrible; sometimes it solves problems and sometimes it makes them worse. We need to consider both extremes if we want to develop technology that is ethical, equitable and really works for society.
 
When Tactical Tech was founded at the beginning of this century, there was a widespread optimistic view of digital technologies as an equalising force that would foster democracy and public discourse. That has not turned out to be the case – why not? 
 
The data-driven technologies that dominate today have been profoundly shaped by business models that focus on accumulating assets for a small number of companies. They were not built for social good. Things like the advertising business model have greatly shaped the technologies we have, such as behavioural or emotional profiling and attention algorithms. Not only have these technologies been significantly shaped by these business models; they also reflect the ideals and values of our time, such as consumerism, individualism, efficiency and scale. Furthermore, these technologies were designed for an ideal world, for an archetypical or standardised world, and not for the real world with all its complexity. 
 
Can you elaborate a bit on the difference? 
 
Many of these technologies were designed for specific user scenarios, for imaginary ideal people, lives and scenarios that have little to do with the complexity, chaos and complications of the real world. Concretely, this means a company might design WhatsApp to connect people and – naively or neglectfully, it is hard to say which – not consider that people could also use it for political manipulation and hate speech. Or someone could design a platform for live video streaming intending it to be used to stream birthday parties and somehow overlook the real chance that it might also be used to live stream criminal and violent acts – which we have unfortunately seen happen.
 
Is that a concept you would also like to extend to AI and machine learning (ML)?
 
Yes, it is the same problem. The widely used design and engineering methodologies are fundamentally flawed. There are two essential problems. One, as mentioned above, is that they are designed for an ideal and not a real world. With machine learning, for example, we have seen countless algorithms being fed biased data, including sexist and misogynistic content. They then amplify that bias back out in the results. The other issue is that technologies are being designed for individual users, but what we actually need are design methodologies for ‘society as user’. Until we overcome these two problems, we will keep repeating and amplifying the same mistakes.
 
To incite discussion and raise awareness about the impact of technology on our daily lives, Tactical Tech invented creative formats like the “Glass Room”. This exhibition space looks like an Apple store and staff members are called “Ingeniuses” – a nod to Apple’s “geniuses”. Nothing is for sale though. The interactive exhibition is meant to encourage reflection on the impact and challenges of tech developments and to show possible ways of responding to them. So far, the large-scale Glass Room exhibitions in Berlin, New York, London and San Francisco and the pop-up versions set up by libraries, schools and exhibition spaces have attracted a total of more than 168,000 visitors. Why did you choose this unconventional format?
 
The format was crucial for engaging a truly diverse range of people in a conversation about how technology is changing our lives. Employing the visual and aesthetic language that is used to sell aspirational technology, and then subverting it to talk about what is wrong with technology, was a simple and powerful device we felt was essential to get everyday people interested in and relating to the content. 
 
What have been the most surprising experiences with the Glass Room?
 
Most surprising has been the range of people that the exhibition resonates with and how much they connect with the content. We have learned that everyone has a reason to care about how technology impacts their lives, from secondary school students to retired pensioners and from San Francisco and Seoul to Sudan.

The interactive exhibition “The Glass Room” engages people in a conversation about how technology is changing our lives. | Photo (detail): © David Mirzoeff / Tactical Tech

What have you learned about young people’s relationships with digital technology, social media and potential risks through your interactions with them?
 
We are still at an early stage in our work, but we have already discovered two important things. First, young people are really good at using technology, but that does not mean they understand how it works, and they have not yet been given the tools to figure that out in a meaningful way. Second, they care deeply about how technology is linked to the issues they see around them, but this is not currently included in the way they learn about or understand the world. When they learn about politics, democracy, the environment or geography at school, digital technologies are not integrated into the equation for the present and the future. There is lots of scope for work in this area. That is what Tactical Tech hopes to do in the coming years through a process of co-development with young people and by working with existing cultural institutions, communities and educators.
 
In your own words, Tactical Tech is not looking “to tell young people what’s right or wrong, but to help them to find their way”. What does that mean in connection with AI? 
 
We have to start by building knowledge and an understanding of how these technologies work and why they work the way they do. And we have to ensure that there is transparency around when and how AI and ML are used in the systems and services that impact young people. For example, a young person may need to know that an algorithm is being used to grade their exams or that ML is being used to nudge them during computer games, and how this works. Only then can young people meaningfully participate in a response and make their own decisions about what they are OK with and what they are not. And we have to work from a rights-based perspective, giving them the agency to be part of the discussion and not simply taking a protectionist approach.
 
What does ‘AI literate’ mean to you?
 
I think AI literacy is a bit of a difficult term. It would make more sense to talk about digital literacy. For me, that’s about understanding the technologies you use – whatever they are – well enough to make informed decisions. And it is about responding effectively to the problems you encounter and, when necessary, even holding companies and governments to account. AI literacy in particular is something I think we cannot just offload onto users. Governments, public institutions, researchers and perhaps even civil society need to take responsibility, though they are not currently AI literate either, so they cannot make good decisions for others. I would like to see more AI awareness among users, and an understanding of how that is linked to transparency on the part of those who build and make it.
 
Who should be taking the lead in educating young people on how to deal with AI: state institutions like schools and universities, private initiatives like Tactical Tech, or someone else? 
 
I think we need a combination of all these actors. Each group brings a different skill set, and I think they need each other if we are going to help young people tackle this huge issue. Tactical Tech specialises in developing independent, accessible and topical materials that resonate with young people, but schools, cultural institutions and libraries are much better suited to adapting, localising, extending and working through such materials. We see collaboration among all these entities as the ideal scenario. However, peer learning is the most important and potentially powerful method and has to be at the heart of these initiatives.
 
Are all of these entities prepared and equipped to play a part? 
 
If schools, libraries, cultural institutions, community groups and youth groups work with the right specialists who are experts at demystifying technologies and focus on co-development and peer learning, then I think they are really well placed to reach young people and meaningfully engage them in these topics. A lot more needs to be done though, from creative programmes through to curriculum development. This is a rich area for innovation going forward and Tactical Tech hopes to be a major catalyst in this field.
 
What do you see as the most serious risk that young people tend to overlook with AI? 
 
I think it is too early to talk about young people overlooking problems with AI, since I don’t think they have been properly introduced to the topic yet. We haven’t explored the advantages and the disadvantages, the opportunities and the challenges in full and I think we need to.
 
As digital technologies become more important, do you see a future risk that the gap between young people could deepen along the lines of income, the ability to buy state-of-the-art hardware and the skills to use it?
 
This is a good question, and you might logically assume that would happen. Instead, we are seeing that technology is essential as a gateway for people living in difficult situations, as our research on smartphones as lifelines found. Young people from less privileged backgrounds often invest heavily in their technologies as they are essential for their access to the rest of the world and often instrumental to their ability to escape poverty or injustice.
 
Can AI help humans to learn better and to better understand how we learn?
 
Definitely. But in order to be effective, AI needs a large amount of data on each individual student. We still have so many problems to solve beforehand. These include bias, profiling, privacy, educational streaming, stigma, certain types of learning being prioritised in the educational system, and many assumptions that go unchallenged. This is all complicated by the question of how people like teachers, parents and carers might incorporate these insights into learning, and what kind of pressure this could put on the learner. I am not sure even the best designed AI in the world can help us here. When we design these systems, we need to look at how these technologies impact real-case scenarios with real young people, not idealised, problem-solving models. Some recent scandals with online proctoring have shown just how much stress this puts on students. We also need to understand the problems data-heavy technology brings into non-ideal learning situations. Some of the problems I mentioned are inherent to data and some to education overall. The latter need fixing too, otherwise you are just adding technology to an already broken system.
 
What attitude would you like to see your children develop towards AI and ML?
 
I have two children and they are both old enough to learn about AI and ML. I try to help them understand how it relates to the world around them and how it impacts their experiences and the lives of others. Technology is not some separate issue; it is part of the way we live. And, whether we like it or not, we have to accept technology’s omnipresent role if we want to understand and deal with it effectively. It is so important that young people come to understand technology as a mirror and amplifier of our societal and political values. And we need to take an approach to learning that highlights all the pitfalls and all the trade-offs – only then can we support them in developing a useful attitude to both AI and ML.
