One-Sidedness, Stereotypes and Distortions

[Image: A person standing in the midst of screens. © Unsplash]

Too often, cultural and social biases are reflected in the digital space. We have identified some of the key biases that must be addressed before artificial intelligence systems are trained.

Cultural / Systemic bias

Perpetuating historical inequalities, erasing or misrepresenting marginalised cultures

AI systems often reflect dominant cultural narratives while rendering others invisible. This shows up in skewed outputs, culturally inappropriate visual generations, erasure of experiences, and a lack of contextual nuance. With AI tools, we encounter "new" forms of discrimination arising from an intersection of dimensions and characteristics, which often require additional detection tools and/or other mitigation mechanisms. An example of this is Lensa AI, which shows a stark divide in how it generates images of men versus women.

Bias from usage imbalance

How the unequal distribution of education and economic resources leads to underrepresented users

The people building and using AI tools often come from Global North contexts, where privilege and closed access to resources make tech development and implementation unidirectional. This creates gaps in representation and disempowers those without equal access to tools, knowledge, or creative infrastructure.

Bias in training data

Reinforcing existing stereotypes and exclusions

AI systems trained on datasets that are incomplete, biased, or limited to English and other Roman-script languages can replicate harmful tropes, exclude entire communities, create biased taxonomies and models, or misrepresent people's identities. Many of these issues can be avoided if the collection and management of datasets is treated as a broader equitable and just undertaking.

Bias in AI tool design

Structural and design choices embed the assumptions of a limited set of perspectives

Everything from interface design to output shapes how people engage with AI. Design choices often reflect narrow worldviews or commercial priorities rather than the diversity of users or uses.

Bias in consent & ownership

Unclear or absent consent mechanisms, extraction without credit

Training datasets are often scraped without consent, attribution, or compensation — especially from artists and cultural practitioners. This erodes trust and perpetuates extractive models of knowledge production.

Bias due to power imbalance in governing AI tools

Limiting the diversity of decision-making and accountability

The organisations and individuals shaping AI governance systems often lack cultural, regional, or disciplinary diversity. Without accountability structures, small, self-selecting groups of those in power make decisions that affect everyone else.
