Can AI create art?

Interview: Galina Dimitrova-Dimova - Hans Bernhard © Pexels

Right before the start of the Goethe-Institut's EthicAI=LABS in Sofia, Ankara, Athens, Bucharest, Zagreb and Sarajevo, the project curator Galina Dimitrova-Dimova meets Hans Bernhard to talk about his project The Next Biennial Should Be Curated by a Machine, produced by the Liverpool Biennial and the Whitney Museum of American Art. How AI can change art, along with further questions, will be explored in the project throughout 2021.

Galina Dimitrova-Dimova: Hello, Hans! We have known each other for a long time, but I want to start our interview with a short presentation of Ubermorgen, because you are half of this duo - you and lizvlx. You started working together in 1995, as far as I remember. As your bio puts it, you are European artists who work in installation, video, code and performance, doing strange things with software and hardware. Your early work is referred to as 'Media Hacking' and 'Online Performance', combining various forms of digital media into artistic action. In 2000, you created Vote-Auction, an online platform for selling and buying votes, and CNN described you as 'maverick Austrian business people'. The New York Times called your 2005 project 'Google Will Eat Itself' 'simply brilliant'. And I do agree with this.
I have been following your practice with interest, and I have always admired your very critical point of view towards technologies and the power structures related to them. But what I most like and admire is your ironic approach to interpreting these issues. If I had to name one of my favorite projects from your portfolio, it would be the EKMRZ trilogy, in which you criticize the three giants of the internet: Google, Amazon and eBay.

Now I want to talk about the new project you just presented, “The Next Biennial Should Be Curated by a Machine”, which you describe as “An haute couture website as ‘terminal’ to a vast 'networked system,' reimagining the future of curating in the light of Artificial Intelligence and self-learning human-machine systems, generating 64 parallel instances of Biennials in flux”.
So, please tell us more about the idea. I am even more interested to hear whom you criticize in this project - the curators, the artworks, someone else - and whether you really think that AI agents, rather than human curators, should curate a biennial?

Hans Bernhard: Thanks for the introduction and the questions. The point is that it's already happening - that is what we believe. During our research we found out that it depends on the perspective from which you look at the world, but if you look at large institutions, or also small ones, there are algorithms, there are systems in which certain processes happen that lead to exhibitions. You as a curator know how that is: you have to work with budgets, with sponsors, with directors of institutions, with artists. So in the end it's an algorithmic process.
“The Next Biennial Should Be Curated by a Machine” is a project we started and worked on with Leonardo Impett - he's a digital humanist from the UK and a professor in Durham - and Joasia Krysa - she's a professor in Liverpool and a curator who works for the Liverpool Biennial. The basic question was about machine curation. It was just an open question. We said: we're going to research that and see what we can find out - how are the systems configured? And that's actually what we first looked at: how do the existing systems, for example the Liverpool Biennial or the Whitney Museum, work? What are the influences, and who actually decides what ends up in an exhibition? It's not the curator; it's the people who put the curator in the position of being able to decide, for example. But we're not criticizing that. That is your perception of our work - that we are criticizing, or ironically reflecting, power structures. From our point of view we are not; we are artists who are interested in playing with ideas and who are curious about finding out what's happening.

Galina Dimitrova-Dimova: I am curious then to hear your opinion on whether a machine can not only make a better selection, but also produce new artist identities, because that is more or less what your project offers.

Hans Bernhard: Yes. I guess one of the everlasting questions is about definitions - what do you consider art, and are there underlying trends and situations ongoing in society, and maybe also in art education, in the art market, in the art system, where you can see changes happening. For us, the more important question is not how to reproduce - and that's why the website and the project look the way they do - it's not about reproducing, or finding out whether a machine can replace the existing system. Because the curators have become the new artists. That is my opinion. The artists have been kind of degraded to craftspeople who deliver specific concepts, or digital or physical objects or configurations, into a bigger picture that a curator designs - a bigger idea, or a research project that in the end results in an exhibition that maybe asks more questions, gives some answers, or shows something.
“The Next Biennial Should Be Curated by a Machine” is more of an open take, just a thing we throw out there. On the interface side, it's really about entertainment. You just enjoy it, you know? That's why we use vague, ephemeral stuff - like TikTok sounds combined with, in a way, old-school art techniques, like GIF animations - that gives you this kind of entertainment. And on the other side, you're actually confronted with machine-learning algorithms that have been fed with data from the Liverpool Biennial and the Whitney Museum, but also with all kinds of other data: police interrogation data, music reviews from Rolling Stone, scripts from the Nigerian film industry that we had access to, and things like that. So different categories of text have been mixed, and it's all text-based. You go to the entertainment website and then you dive into generated, fluid realities. That's why we use the metaphor of the universe, or the metaverse, where everything is fluid at the same time. This is just a take. So as a user you can just go in, experience it, and see: do you find anything that is interesting for you? Do you see any new type? Because we're not telling you what to think; we don't have a conclusion.

Galina Dimitrova-Dimova: One last question, related to the EthicAI project we are now involved in with the Goethe-Instituts in Southeast Europe. I want to hear what you think about AI and ethics: how do you understand this issue, and what, in your view, are the milestones in these debates?

Hans Bernhard: We have to go back to the first question and say again that we are already dealing with artificial intelligence, as I understand it, on a specific level - the level of corporations. In all other areas we are not talking about artificial intelligence; it's the wrong word. We are using machine learning, which I would say is a subcategory of artificial intelligence, and machine learning is very specific. One of the most important things I have learned about AI, or machine learning, is that it's very, very specific.
So machines are able to look at and solve very specific problems - playing chess, or looking at weather data and making predictions - but in a very narrow way. Once you start to connect this data, or you expect machines to understand more about the world, they fail, because it's not possible. That is one side. The other side is obviously the bias: the machine is built by humans - that is very obvious - and it's fed data that is curated by humans. That is another aspect.

If we talk about ethics, we have to consider this. And then, for me, the most important fact, as I mentioned initially, is that we are already confronted with artificial intelligence - and this is nothing new, it has nothing to do with computers - in the form of corporations and also bigger institutions. These are algorithms with a higher level of complexity, which have specific goals and do everything to follow those goals, and in which humans are not the primary interest. It's not about humans; it's about making money. So humans get kind of degraded, or we have allowed these systems…

But this poses another question, if we talk about ethics: how extremely anthropocentrically we look at our world - which is natural, because we are humans. We look at the world in a human way; it's a catch-22, right? But on the other hand we have to deal with a lot of things that are not human. We deal with machines, we deal with algorithms, we deal with animals - we are animals ourselves, there are different species - we deal with plant systems, and so on. I have no answer. I don't know what applies in terms of ethics.
What I do know comes from another, interconnected field of research we are doing, which deals with new tendencies in politics: right-wing politics, the alt-right, incels, white supremacy, technology supremacy.