Bias, Stereotypes and Distortions
Too often, cultural and social biases are reflected in the digital space. We have identified some of the key biases that must be addressed before artificial intelligence systems are trained.
Cultural / Systemic bias
Perpetuating historical inequalities, erasing or misrepresenting marginalised cultures
AI systems often reflect dominant cultural narratives while rendering others invisible. This shows up in skewed outputs, culturally inappropriate image generation, erasure of lived experiences, and a lack of contextual nuance. With AI tools, we also encounter "new" forms of discrimination arising from biases at the intersection of several dimensions or characteristics, which often require additional detection tools and/or other mitigation mechanisms. An example of this is Lensa AI, which shows a stark divide in how it generates images of men versus women.
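To make the idea of a "detection tool" more concrete, the following is a minimal, purely illustrative sketch (not any specific product or the tools referenced above) of how one might audit a dataset for representation imbalance. The field name, threshold, and toy data are assumptions chosen only for illustration.

```python
# Illustrative sketch: flag values of a chosen attribute (e.g. language)
# that make up less than a minimum share of a dataset.
# Field names, the 5% floor, and the sample data are hypothetical.
from collections import Counter

def representation_report(records, field="language", floor=0.05):
    """Count how often each value of `field` appears and flag values
    whose share of the dataset falls below `floor`."""
    counts = Counter(r.get(field, "unknown") for r in records)
    total = sum(counts.values())
    report = {}
    for value, n in counts.most_common():
        share = n / total
        report[value] = {
            "count": n,
            "share": round(share, 3),
            "underrepresented": share < floor,
        }
    return report

# Hypothetical usage with a toy dataset of 100 records:
sample = [{"language": "en"}] * 90 + [{"language": "hi"}] * 7 + [{"language": "yo"}] * 3
for lang, stats in representation_report(sample).items():
    print(lang, stats)
```

A check like this only surfaces a surface-level imbalance; as the text notes, intersectional forms of discrimination typically need richer detection and mitigation mechanisms than a single-attribute count.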
Bias from usage imbalance
How the unequal distribution of education and economic resources leads to underrepresented users
The people building and using AI tools are concentrated in global-north contexts, where privilege and closed access to resources make tech development and implementation largely one-directional. This creates gaps in representation and disempowers those without equal access to tools, knowledge, or creative infrastructure.
Bias in training data
Reinforcing existing stereotypes and exclusions
AI systems trained on datasets that are incomplete, biased, or limited to English and other Roman-script languages can replicate harmful tropes, exclude entire communities, produce biased taxonomies and models, or misrepresent people's identities. Many of these issues can be avoided if the collection and management of datasets is treated as a broader equitable and just activity.
Bias in AI tool design
Structural and design choices embed the assumptions of a limited set of perspectives
Everything from interface design to the form of outputs shapes how people engage with AI. Design choices often reflect narrow worldviews or commercial priorities rather than the diversity of users and uses.
Bias in consent & ownership
Unclear or absent consent mechanisms, extraction without credit
Training datasets are often scraped without consent, attribution, or compensation, especially from artists and cultural practitioners. This erodes trust and perpetuates extractive models of knowledge production.
Bias due to power imbalance in governing AI tools
Limiting the diversity of decision-making and accountability
The organisations and individuals shaping AI governance systems often lack cultural, regional, or disciplinary diversity. Without accountability structures, small, self-selecting groups of those in power make decisions that affect everyone else.