Many current questions arise in the context of artificial intelligence (AI) and ethics. With a series of open lectures and panel discussions, the EthicAI project aims to initiate a discourse on these topics across national and genre boundaries.

The joint project invites multidisciplinary dialogue. Central topics from the field of linguistics are language acquisition, language processing and translation, as well as linguistic communication between humans and AI. In the area of bias, the discussion will cover aspects such as cognitive processing, human error, discrimination and data protection, as well as AI that eliminates or reinforces human prejudices. The creativity strand focuses on creativity through algorithms, copyright and censorship. And in the topic area of media, the discussion will address the use of private data for political polarisation, propaganda and advertising, as well as the future of deepfakes and similar media manipulations.

EthicAI's goal is to open a cross-genre space for critical conversation in Southeast Europe.

EthicAI is a project of the Goethe-Instituts in Athens, Bucharest, Ankara, Sarajevo, Sofia and Zagreb.

Let’s talk about AI and ethics 
In February 2021, the Goethe-Instituts of Southeast Europe began work on the EthicAI Labs project, which aims to research and discuss topics related to AI and ethics. It continues the central Goethe-Institut's project Generation A = Algorithm, with the emphasis now on the ethical aspects of the interaction between humans and machines.
This discourse is highly relevant today, as the many initiatives in this direction show: in recent years, numerous conferences have brought together representatives from various fields (science, technology and culture) to discuss the trustworthiness of AI.

But why should we care about this? Because, as most researchers on the subject say, technology is not neutral. First, it follows the agenda set by its developers, who in most cases are powerful corporations that already dominate the digital sphere through their services and products. Second, it bears the imprint of the people involved in developing and refining these products by training the software agents. Finally, the users themselves, depending on how they use these systems, also affect the machine- and deep-learning process of the AI.

In other words, we are all part of this process, whether we realize it or not, and this raises a number of issues related to power, control, stereotypes, manipulation and threats. The discussion is difficult and complex because what is meant to help people live better and develop personally often has the opposite effect: it becomes an instrument in the hands of those who hold power and build structures of subordination. China is a leader in this process, and the control and standardization measures it imposes on its citizens are, to put it mildly, frightening. One example is Xinjiang Province, where surveillance and control systems classify residents into categories that grant different rights of access to information, places and services.

Obviously, this affects basic human rights and raises alarms about the risk of digital despotism as well as various forms of discrimination. You may say that what happens in China does not threaten us, and you are right, because here, especially in the EU, we have regulations for this. According to the EC, trustworthy AI must be “lawful - respecting all applicable laws and regulations; ethical - respecting ethical principles and values; and robust - both from a technical perspective while taking into account its social environment”.

However, these are prescriptions and recommendations, and in real life we often see that things are not quite like that. But should we be afraid of AI? In fact, many experts involved in the debate prefer not to call it that. According to them, for now we are talking about algorithmic agents and systems based primarily on statistics. In other words, it is an extremely complex data-processing system trained to make decisions based on given indicators. And here arises the other big ethical issue: the training process is shaped by seemingly secondary but important factors, such as the intellect, sensitivity and ethical norms of the people who carry it out, which in turn are determined by their social and cultural background (and this is the basis of what we call bias).
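The point that such systems simply "make decisions based on given indicators" can be sketched with a deliberately tiny, hypothetical example: a rule learner that memorizes the majority outcome per group in invented historical records. The group names and outcomes are made up for illustration; if the records are skewed, the learned rule is skewed in exactly the same way.

```python
# Toy sketch (hypothetical data): a "decision system" that reproduces
# the statistics of its training data. The algorithm is not malicious;
# it simply mirrors the skew in its inputs.
from collections import Counter

# Invented historical decisions as (group, outcome) pairs.
# Group "A" was historically approved far more often than group "B".
history = (
    [("A", "approve")] * 90 + [("A", "deny")] * 10
    + [("B", "approve")] * 30 + [("B", "deny")] * 70
)

def train(records):
    """Learn the majority outcome for each group."""
    by_group = {}
    for group, outcome in records:
        by_group.setdefault(group, Counter())[outcome] += 1
    return {g: c.most_common(1)[0][0] for g, c in by_group.items()}

model = train(history)
print(model)  # the "trained" rule encodes the historical skew verbatim
```

No one wrote a rule saying group "B" should be denied; the bias lives entirely in the data the system was given.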

Here we refer to what scientists call "social intelligence", which, for us humans, is determined by our interaction with the environment and with other people. This is impossible for AI, because it cannot determine these norms of behavior itself; they are set for it from the outside. AI has no ability to make causal connections, or to decide for itself what is good or bad, right or wrong, pleasant or unpleasant…

An example of this is Microsoft's chatbot Tay, which, in less than a day on the social network Twitter, turned from a polite and pleasant conversational bot into an outspoken hater and racist, making statements so scandalous that the company quickly withdrew it.
This is how we open Pandora's box in the debate about how something initiated to be useful and objective can easily be manipulated and turned into a threat. Just think about it: these types of algorithms are embedded in the work of the police, the judiciary and other power structures that affect each of us. Based on statistics, these algorithms would say that black people are potential criminals, or that women are… sexy chicks. This brings us back to the topic of bias and all the potential problem areas that stem from the models embedded in these algorithms. For example, if they have to depict an ordinary man, they usually represent a middle-aged white man in a business suit, while the female model is a young, sexy woman in skimpy clothes. This comes precisely from the fact that the developers of these systems are mostly white men who transmit this type of perception to the algorithms.
Again, we should not blame these algorithms; they are simply a product of our notions and interactions. But what happens if they become the drivers of their own actions? What would they create if they had free access to data and generated content on their own? A possible answer is given by the project The Next Biennial Should Be Curated By a Machine by the artist duo UBERMORGEN, digital humanist Leonardo Impett and curator Joasia Krysa. The B3 (NSCAM) software developed by the project team takes data from the Whitney Museum and Liverpool Biennial databases and generates biographies of artists, curatorial statements and press releases.

This brings us to the topic of creativity and language, which are considered typically human activities. It is creativity and writing, privileged human activities, that distinguish us from other animal species. Now, however, another, non-human "species" is emerging that "occupies" these activities and begins to write articles, compose musical works, paint and model virtual worlds with such mastery that its output is difficult to distinguish from that created by humans.
So will the machine be the threat to our world, or will it be us, as societies and as a civilization? Can the machine make better decisions than humans? Do we trust algorithms more than we trust people? What is happening in this pandemic era, dominated by digital activities and media, when human interaction remains limited? How will this affect our evolution as humanity?
We will look for answers to these and other similar questions as the EthicAI Labs project unfolds, and we would like to hear a variety of voices and points of view. So stay tuned…