The binge-watching problem

Netflix Streaming © freestocks

With the help of artificial intelligence, social media and video streaming platforms are designed to capture as much of our attention as possible and to shape our decision-making. So what happens when an AI-driven voice assistant tries to curb your binge-watching?

Emma Engström

It is a quarter to midnight on a Tuesday, and you ask your virtual voice assistant to stream the next episode. It seems an easy task for an AI.

The catch is that last week you asked your voice assistant – maybe Alexa’s younger sister, let’s say Alice – to help you cut down on binge-watching, at least on weekdays. She may even have proposed it herself, because you tend to arrive late for morning meetings, such as the one tomorrow. And she knows – the way an algorithm knows – that your employer has a work-ethics policy that underlines punctuality. She has also registered that you are often agitated after arriving late to a meeting, obvious to her from your heart rate and the tone of your voice. And she has found out, based on your browsing history, that you are looking for a promotion within the company.

When she asked whether you were prepared to take measures IRL to increase your chances of getting promoted, you agreed. In fact, last Saturday you told her that you do not want to stream any new episodes after 11.30 p.m. on weekdays. She therefore has good reason to believe that you would benefit in the long run from a good night’s sleep now.

So, what should Alice do? Is it obvious that she should play the next episode? Or should she try to keep you on the wagon?

AI-driven voice assistants might be handy, but they are gathering lots of data, too. | Photo credit: Clay Banks / Unsplash

Choice in an AI age

A new structure for human decision-making is emerging with AI. Many of our choices are already implemented or shaped virtually. A large share of our lives plays out online, and the rest – what we do IRL – is increasingly planned and evaluated online, for example when we use a smartphone app to book a taxi or rate a restaurant. AI algorithms may be slipped in through inconspicuous updates to websites and apps that we first visited or installed long ago.

In this way, our behaviour increasingly takes place under the influence of narrow AI. This includes machine learning algorithms that use deep learning architectures for personalized recommendations, and virtual assistants that rely on automatic speech recognition to transcribe speech to text and on natural language processing to interpret the meaning of that text.

There are some notable differences between old-school recommendation algorithms and their adaptive, AI-powered counterparts in voice assistants. The latter integrate more easily into everyday life, and are therefore likely to be used more often, to collect more data, and to give better advice – which in turn makes them still more likely to be used, and so forth.

The implication is that AI is likely to appear in more contexts than before and to use new tools of persuasion, and that new types of decisions will be influenced, both short- and long-term: the next meal, the next job – and the next episode.

AI Decision-Aid as Band-Aid

AI-powered decision aids also hold great promise for behavioural change. AI can be a clinically effective tool for habit formation, according to the psychologist and assistant clinical professor Cameron Sepah in an article on Medium. Sepah argued that an AI can reward habits as well as social reinforcement can, and he highlighted AI’s potential for gamified systems and its ability to adapt and vary rewards. He added that habits need regular reminders to promote the desired behaviour, and that this is a suitable task for an AI.

It is clear that AI-based recommendations can be very effective. For example, watch time on YouTube increased 20-fold in the three years after Google Brain launched its deep-learning-powered recommender system, and such recommendations currently drive 70 percent of the time spent on the platform, according to the author Chris Stokel-Walker in the New York Times.

With the rise of online streaming services, binge-watching is now easier than ever before. | © Emma Engström

The trolley problem but with episodes

The binge-watching problem relates to the Trolley problem, a thought experiment presented by the philosopher Philippa Foot in 1967. It poses a choice between not pulling a lever, thereby allowing a runaway trolley to hit and kill five people, and pulling it, diverting the trolley so that it kills one person instead. In brief, the problem is about the difference between doing and allowing harm, as well as the duty not to inflict harm versus the duty to render aid.

A do-no-harm approach to the Trolley problem is to not pull the lever and thus allow five deaths, while the utilitarian solution is to pull the lever and kill one person to save five, as John Cloud articulated in Time. Survey studies across different demographics have shown that about nine out of ten people think the latter is the better choice, favouring the utilitarian approach.

Edmond Awad and co-authors in Nature discussed the Trolley problem to highlight moral decision-making by machines such as autonomous vehicles. In this case, the problem involves a “driver” in a car and a group of five pedestrians. Since the car is autonomous, it has to be programmed to detect any object in harm’s way – a single pedestrian as well as a group of five – and to respond to both types of obstacles. A course of action has to be specified in advance. An important point is that the difference between doing and allowing is more obvious for a human than for an AI application – such as Alice. She may not have a passive choice.

The binge-watching problem involves a different kind of conflict. The choice is whether Alice should follow your wishes from back then, when you wanted to reduce your screen time, or your wishes now, when you would like to keep watching. And she may have a range of means of persuasion to push your behaviour either way.

For autonomous vehicles, the problem has been modified with a range of scenarios, including older or younger pedestrians, family members as car passengers, and illegal traffic behaviour. A corresponding tweak to the binge-watching problem might be that you make a criminal request, or that you give a command when you obviously lack self-control. What does Alice do if you slur your words, obviously drunk, when you ask her to buy illegal drugs for you?

Multi-series shows like "Game of Thrones" are popular with binge-watchers. | © PictureLux / The Hollywood Archive / Alamy Stock

It may not be that simple

There is a simple and – arguably – neutral solution to the binge-watching problem: Alice humbly reminds you of your vow last Saturday to reduce screen time at night, and then asks if you would like to continue watching anyway. This may be considered the passive alternative – allowing, in Trolley-problem terms. However, note that the distinction between allowing and doing may not apply to an AI application. This suggests that Alice should instead act in a way that increases your long-term benefit.
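To make the passive alternative concrete, here is a minimal sketch of what such a policy could look like. It assumes a hypothetical assistant interface – the weekday cutoff and the ask() helper that relays a question to the user are illustrative stand-ins, not any real product’s API.

```python
from datetime import datetime, time

# Minimal sketch of the "passive" policy: remind the user of their stated
# goal, then defer to whatever they answer. The cutoff and the ask()
# callback are hypothetical stand-ins, not a real assistant API.

WEEKDAY_CUTOFF = time(23, 30)  # the vow: no new episodes after 11.30 p.m.

def passive_policy(now: datetime, ask) -> bool:
    """Return True if the next episode should play."""
    is_weekday = now.weekday() < 5          # Monday = 0 .. Friday = 4
    past_cutoff = now.time() >= WEEKDAY_CUTOFF
    if is_weekday and past_cutoff:
        return ask("Last Saturday you asked me not to start new episodes "
                   "after 11.30 p.m. on weekdays. Continue anyway?")
    return True  # no goal applies: just play the episode

# Example: a quarter to midnight on a Tuesday - the episode plays
# only if the user confirms.
play = passive_policy(datetime(2023, 5, 16, 23, 45),
                      ask=lambda q: input(q + " [y/n] ") == "y")
```

Whatever the user answers is final; this policy never argues back.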

Further, if Alice has followed you closely for some time, she may have registered how you typically react when she humbly reminds you of one of your long-term goals. She may be all but certain that a humble recommendation is pointless, and that there are far more fruitful ways to present the option of calling it a night.


Doing it for Alice

The binge-watching problem highlights that an AI-driven general virtual assistant may have to account for both long- and short-term goals, and that it may have a large influence on whether those goals are reached.

Considering the wide support for the utilitarian approach to the Trolley problem, the best option for Alice could be to refuse to instantly follow your command. This assumes that she has good reason to believe that you would benefit in the long run from going to bed now, and that you would be more likely to reach your long-term goals if she tried to convince you. There seems to be an imperative for Alice to roll up her sleeves and get involved.

If so, note that it is not obvious how far she should go in the battle between your earlier, ambitious self and the current slacker on the couch. For example, she could ask, in an indignant voice: are you really going to watch another episode? Or she could show you enticing photos of the dream house you plan to buy as soon as you get that promotion. She could display statistics on the ages at which your highest-earning LinkedIn connections were first promoted to a management position. Or she could turn off the screen, sound an alarm, and then play a soft water sound to send you to the bathroom. (She might have figured out which of these is most effective.) Her options seem unlimited.
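The parenthetical above hints at a learning problem: which nudge actually works on you? One standard way to frame it is as a multi-armed bandit, where each nudge is an arm and “you going to bed on time” is the reward. The sketch below is a simple epsilon-greedy version with invented intervention names; it illustrates the idea, not any real assistant.

```python
import random

# Hypothetical sketch: learning the most effective nudge, framed as an
# epsilon-greedy multi-armed bandit. Intervention names and the success
# signal (the user actually going to bed) are invented for illustration.

INTERVENTIONS = [
    "indignant_question",      # "Are you really going to watch another episode?"
    "dream_house_photos",
    "linkedin_promotion_ages",
    "alarm_and_water_sound",
]

counts = {i: 0 for i in INTERVENTIONS}     # times each nudge was tried
successes = {i: 0 for i in INTERVENTIONS}  # times the user then went to bed

def choose_nudge(epsilon: float = 0.1) -> str:
    """Explore a random nudge with probability epsilon;
    otherwise exploit the best observed success rate."""
    if random.random() < epsilon:
        return random.choice(INTERVENTIONS)
    return max(INTERVENTIONS,
               key=lambda i: successes[i] / counts[i] if counts[i] else 0.0)

def record_outcome(nudge: str, went_to_bed: bool) -> None:
    counts[nudge] += 1
    successes[nudge] += int(went_to_bed)
```

Over enough late evenings, a loop of choose_nudge() and record_outcome() converges on whichever intervention works best on you.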

This article was first published on AI Futures, a blog on the social impacts of AI run by the Institute for Futures Studies in Stockholm. Learn more about Emma Engström's views on the future of creative AI here.
