“I put the participant in the middle of the experience”: Karen Palmer on AI storytelling

A scene from the immersive film experience “Perception iO” © Karen Palmer

Karen Palmer is a self-dubbed storyteller from the future. The British artist combines film, AI, gaming, art and behavioural psychology to create immersive film experiences that change and evolve the story depending on the viewers’ emotional reactions.

Barbara Gruber

Karen Palmer started her career as a producer of music videos and television adverts but was drawn to immersive storytelling and technology. After experimenting with wearable tech and electroencephalogram head sensors in film, she sought out new storytelling experiences. That’s when Palmer saw the potential of artificial intelligence (AI) in her storytelling art.
 
Karen Palmer | Credit: Steve Ambrose

Together with computer scientists from Brunel University London, neuroscientists from New York University and the creative research lab ThoughtWorks, Palmer developed and trained the AI tool EmoPy to detect facial expressions and identify emotions such as anger, fear and calm.
 
The Goethe-Institut spoke to her about her projects RIOT, Perception iO (pictured above) and Consensus Gentium, and how AI will change the way we tell stories in the future.
 
Your work RIOT is an emotionally responsive installation using facial recognition and AI to navigate through a dangerous riot. What interests you about riots and using AI to explore this very immersive way of storytelling?
 
The murder of Michael Brown and the Ferguson riots in America really affected me, and a spate of other riots where young black men were murdered by the police. Then there was Eric Garner and "I can't breathe”. Even though I'm not American, I'm a black chick, and it’s affected me. I was watching how riots were being represented in the media. And I thought, ‘no, no, it's not some form of renegades’, it’s like Martin Luther King says: ‘a riot is the language of the unheard’. I wanted people to be in the middle of my interpretation of a riot environment, to be inside that space and have the experience respond to you depending on your emotions, and just give you some insight into the complexities.
   
I’ve created this live installation, which you step into with a projection of a film in front of you — and around you is ambisonic sound, so that's as close to the human ear as possible — it's very visceral. Under your feet there may be debris from a riot, and parts of the set design kind of spill out of the screen. And as you watch the film, there's a webcam watching you back.
 
It's hardly detectable because the lighting in the space is quite atmospheric. But as you watch it, the camera is watching you. And as you respond to the film and what's happening, it will log that emotion mathematically, from the points in your face, and then it will say: ‘OK, at this point you were scared’, or ‘at this point you were calm, or angry’. And then that tells the film where to branch: at the point when you're angry, it must branch to the response from a cop — if you become aggressive towards them, they will respond in a particular way.
 
So machine learning is the component behind the facial recognition. It’s an engine where we've created this dataset which labels ‘this is a person's face that’s angry’, ‘this is a person's face when it's calm’. And then it will look at you and go ‘bling — that's a match!’ and then the narrative will branch.
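The branching mechanism Palmer describes — classify the viewer's emotion, then route the film to the matching scene — can be sketched in a few lines of Python. The scene names and emotion labels below are invented for illustration; her actual installation uses the EmoPy facial-expression classifier on live webcam frames, which is not reproduced here.

```python
# Illustrative sketch of emotion-driven narrative branching.
# Scene names and emotion labels are hypothetical, not from RIOT itself.

# Branching table: (current scene, detected emotion) -> next scene
BRANCHES = {
    ("cop_confrontation", "anger"): "cop_escalates",
    ("cop_confrontation", "fear"): "cop_shouts_orders",
    ("cop_confrontation", "calm"): "cop_deescalates",
}

def next_scene(current: str, emotion: str) -> str:
    """Branch the film based on the viewer's detected emotion,
    falling back to a neutral continuation for unrecognised pairs."""
    return BRANCHES.get((current, emotion), f"{current}_continues")
```

In a full pipeline, `emotion` would come from a classifier scoring each webcam frame; the lookup table is what makes the story "respond to you".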
 
Bias is very much at the centre of your work. There’s been a lot of criticism around facial recognition tools, for example, for recognising white males much better than non-white people and females. How have you trained your AI on emotion detection to avoid falling into the same trap?
 
When I did the first prototype of this iteration, that wasn’t part of our conversation; it was more about the mathematics of the face at that time. When I developed the system further in 2017, we were looking at the data that we had collated, because we had developed a bigger dataset of emotions. When it came to saying ‘someone's calm’ or ‘someone's angry’, we asked: ‘who's going to make that fundamental decision?’ Because if we were to get a white cop or a black woman, one of them may say ‘that's calm’ and the other may say ‘that's angry’. And so we got a diverse range of people to help us label the datasets, to minimise the possibility of potential bias. However, through doing that process, and because I'm a research-based artist, my team and I came to the conclusion that it was almost impossible to eradicate bias. Most artists will tend to just buy, say, IBM Watson or another AI system. But it's fundamental to my practice to explore these questions first-hand and ask: How are we going to label it? Who's going to label it? If we have more black people or more white people labelling it, you know whose bias is going to be there. It's not a question of whether there is going to be bias or not: there are going to be biases. Whose bias is there?
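The labelling process she describes — a diverse group of annotators each judging the same face, rather than one person deciding — is commonly aggregated by majority vote, with disagreements flagged rather than guessed. A minimal sketch (the annotator labels and the `needs_review` convention are illustrative assumptions, not Palmer's actual pipeline):

```python
from collections import Counter

def aggregate_labels(annotations: list[str]) -> str:
    """Majority vote across annotators' emotion labels for one image.
    A tie means the annotators disagree, so the example is flagged
    for review instead of silently picking one person's bias."""
    counts = Counter(annotations)
    (top_label, top_count), *rest = counts.most_common()
    if rest and rest[0][1] == top_count:
        return "needs_review"  # no single majority among annotators
    return top_label
```

The design choice matters for exactly the reason she gives: when a white cop and a black woman label the same face differently, the tie surfaces as `needs_review` instead of being resolved by whoever happened to label more images.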
A visitor to the Peru Museum of Modern Art watching “RIOT” | Credit: Karen Palmer

You also explore bias in the installation “Perception iO”, which brings together AI, emotion detection and eye tracking to reveal our gaze and our perception of reality. Can you explain how “Perception iO” reveals our unconscious, racist bias?
 
RIOT was making people conscious of their subconscious behaviour. Perception iO goes one step further: it makes people conscious of their potential implicit bias. I created Perception iO around the future of law enforcement, or what’s going to replace the police: automated AI systems. Who is going to train those AI systems? It’s not going to be brown people like me, it's going to be the people who are currently training cadets or developing these systems of institutionalised racism. But I put the participant in the middle of the experience by saying: you are going to be training the AI, you are going to be the cop, you're going to come into contact with both a black and a white protagonist. They may have mental health issues or they may be criminals, but you won't know. But there are signs in there that, if you look with your gaze, you may see. If you respond angrily or aggressively as the person watching, maybe that person will be arrested or shot. And if you respond differently, maybe calmly, maybe they'll be able to come to a good conclusion.
 
So it has the potential to make you aware of your implicit bias because maybe you responded differently to the white person than you did to the black person. I'm not saying to people ‘oh, you are racist’. As an artist, I’m creating an environment for self-reflection.
 
How do you then get people to act upon this realisation of this awareness and then actually start implementing and creating change?
 
My third project, currently in development, is Consensus Gentium. The objective is to enable people to become conscious of their subconscious behaviour, and to move through fear to have agency in their life. It’s based on the premise that we've already been jettisoned into a dystopic present. If we continue on this trajectory, what would be the potential extension of that, in an even more dystopic world? That’s one world I'm creating. The other is this: in order to deviate from this dystopian world, we need something more utopic, so what would we need to do to deviate? As the participant, you will experience the dystopic and the utopic, your role within each, your acquiescence, apathy or compliance, and where that will take you and us.
 
Moving fast: AI filmmaker Karen Palmer calls herself a “storyteller from the future” | Credit: Karen Palmer

I spent the last nine months developing this Hack The Future Lab initiative, where I brought together thought leaders: techno-activists, neuroscientists, elite free runners, psychologists, spiritualists, people at the cutting edge of the intersection of art, tech and science. I started to create these imaginary worlds and now we've developed three immersive scripts.
 
You call yourself a storyteller from the future. How powerful is this cultural technique of storytelling, the use of narrative to explore and imagine the future?
 
To me, it is fundamental. I've always been very forward-focused, always speaking about things which, as people have told me, have come to pass a few years later. And I can see a lot of things coming that are super apparent to me, but most other people aren't connecting those dots.
 
In terms of storytelling, if I can put you in the middle of an immersive world, then you can experience the future today, your role within it and the part that you play. It is a very visceral, tangible thing, connecting your gut and emotional side. While I think a lot of the time we're acting and we're being manipulated in different parts of our brain, like say, social media and things like that, I want people to kind of disentangle from that and just experience the world and have their emotions and their subconscious guide them. So to me, there's nothing more powerful than enabling you to experience your potential future, based upon your actions and your agency — or your lack of agency —  to kind of be a mirror to you as to where we're going.
 
Storytelling is very important in creating identity and culture. Do you see a risk that, through the ever-increasing use of AI online, with everyone getting their own timelines and personalised experiences, we end up no longer sharing common stories?
 
No, not at all, because the whole angle of my work for the past couple of years is about living through fear and perception of reality. None of our perception of reality is objective, it's totally subjective. And if I can enable people to somehow overlap in their perceptions of reality, I feel then we can get some greater cohesive understanding not just of ourselves, but each other. I don't feel it's going to separate us more, that we're going to have all these separate stories. I feel that we’ll have a deeper insight into each other's narratives.
