Input Talks

Data Engineering Stage - Input Talks with a Focus on Bias and Inclusion

Abstract microscopic photography of a Graphics Processing Unit resembling a satellite image of a big city
© Ezequiel Hyon

Tue, 27.09.2022 9:30 AM - 11:00 AM

Online


Schedule


09:30 Ute Schmid (Otto-Friedrich-Universität Bamberg): Designing Explainable AI Systems to Reduce Automation Bias and Increase Justified Trust in AI
Whether AI systems and applications fulfill certain ethical standards depends on how they are embedded and regulated, but also on the availability of suitable AI methods. Characteristics such as transparency, explainability, fairness, and robustness can only be obtained via appropriate AI methods. In the talk, I will first introduce fundamental properties of machine learning, including current data-intensive deep learning methods. I will highlight advantages as well as inherent problems of machine learning (black-box models, unfair biases, lack of robustness). Finally, I will introduce the so-called third wave of AI methods: explanatory, interactive, and hybrid approaches to machine learning.

09:45 Q&A


09:55 Ruben Bach (University of Mannheim) and Christoph Kern (LMU Munich), Caius-Project: When Small Decisions Have Big Impact: Fairness Implications of Algorithmic Profiling of Jobseekers
Algorithmic profiling is increasingly used in the public sector to support the allocation of limited public resources. For example, criminal justice systems use algorithms to inform the allocation of intervention and supervision resources; child protection services use algorithms to target risky cases and to allocate resources such as home inspections to identify and control health hazards; immigration and border control use algorithms to filter and sort applicants seeking residence in the country; and Public Employment Services use algorithms to identify jobseekers who may find it difficult to resume work and to allocate support programs to them. However, concerns have been raised that profiling tools may suggest unfair decisions and thereby cause (unintended) discrimination. To date, empirical evaluations of such potential side effects are rare. Using algorithm-driven profiling of jobseekers as an empirical example, we illustrate how different modeling decisions in a typical data science pipeline may have very different fairness implications. We highlight how fairness audits, statistical techniques, and social science methodology can help to identify and mitigate biases, and argue that a joint effort is needed to promote fairness in algorithmic profiling. A minimal sketch of what such a fairness audit checks is given below.
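
To make the idea of a fairness audit concrete, here is a minimal sketch in Python. It is illustrative only: the toy data, the two group labels, and the choice of metrics (demographic parity and equal opportunity differences) are assumptions for exposition, not material from the talk.

# Minimal sketch of a group-fairness audit (illustrative only; the toy data,
# group labels, and metric choices are invented, not taken from the talk).

def selection_rate(preds):
    """Share of cases the model flags as 'high risk' (prediction == 1)."""
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    """Share of truly long-term unemployed cases (label == 1) the model catches."""
    caught = [p for p, y in zip(preds, labels) if y == 1]
    return sum(caught) / len(caught) if caught else 0.0

# Toy predictions (1 = profiled as 'hard to place') and observed outcomes
# for two hypothetical demographic groups A and B.
preds_a, labels_a = [1, 0, 1, 1, 0, 1], [1, 0, 1, 0, 0, 1]
preds_b, labels_b = [0, 0, 1, 0, 0, 0], [1, 0, 1, 1, 0, 0]

# Demographic parity difference: gap in selection rates between groups.
dp_gap = selection_rate(preds_a) - selection_rate(preds_b)

# Equal opportunity difference: gap in true positive rates between groups.
eo_gap = true_positive_rate(preds_a, labels_a) - true_positive_rate(preds_b, labels_b)

print(f"Demographic parity difference: {dp_gap:.2f}")
print(f"Equal opportunity difference:  {eo_gap:.2f}")

Large gaps on either metric would indicate that a modeling decision (feature set, threshold, training data) treats the two groups very differently, which is the kind of disparity the fairness audits mentioned in the abstract are meant to surface.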

10:10 Q&A


10:20 Atoosa Kasirzadeh (University of Edinburgh): Algorithmic Fairness and Structural Injustice: Insights from Feminist Political Philosophy
Data-driven predictive algorithms are widely used to automate and guide high-stakes decision-making such as bail and parole recommendations, medical resource distribution, and mortgage allocation. Nevertheless, harmful outcomes biased against vulnerable groups have been reported. The growing research field known as "algorithmic fairness" aims to mitigate these harmful biases. Its primary methodology consists in proposing mathematical metrics to address the social harms resulting from an algorithm's biased outputs. The metrics are typically motivated by, or substantively rooted in, ideals of distributive justice as formulated by political and legal philosophers. The perspectives of feminist political philosophers on social justice, by contrast, have been largely neglected. Some feminist philosophers have criticized the local scope of the paradigm of distributive justice and have proposed corrective amendments to surmount its limitations. The present talk brings some key insights of feminist political philosophy to algorithmic fairness. The talk has three goals. First, I show that algorithmic fairness does not accommodate structural injustices in its current scope. Second, I defend the relevance of structural injustices, as pioneered in the contemporary philosophical literature by Iris Marion Young, to algorithmic fairness. Third, I take some steps toward developing the paradigm of "responsible algorithmic fairness" to correct for errors in the current scope and implementation of algorithmic fairness. I close with some reflections on directions for future research.

10:35 Q&A and discussion