Submitted by Uljana Feest (Leibniz University Hannover).
Bias and Discrimination in Algorithmic Decision-Making
Issues in Explainable AI #3
October 8/9, 2021, Hannover, Germany
Invited speakers:
- Emily Sullivan (TU Eindhoven)
- Christian Heinze (U Heidelberg)
- Markus Langer (U Saarland)
- Kasper Lippert-Rasmussen (U Aarhus)
CALL FOR PAPERS
Algorithmic predictions are increasingly used to inform, guide, justify or even replace human decision-making in many areas of society. However, there is growing evidence that algorithmic predictions are often shaped by bias and discrimination and thus threaten to have detrimental effects on certain social groups and on social cohesion in general.
We invite researchers to present their work and discuss their ideas concerning these challenges at our workshop in Hannover. The workshop will be held in person but may be moved online depending on the course of the pandemic.
Contributions from various disciplines, including epistemology, ethics, law, sociology, psychology, and computer science, are welcome.
Presentations (20 minutes) may include, but are not limited to, research on the following topics:
- Conceptual issues of algorithmic bias and discrimination (epistemology of computer bias vs. human bias; meaning and classification of algorithmic discrimination; psychological, sociological, legal, technical frameworks for capturing algorithmic discrimination etc.)
- Normative tenets of dealing with algorithmic bias and discrimination (relation to theories of social fairness and political justice; stereotype threat, affirmative action and their application to algorithmic discrimination; connections of discrimination to issues such as AI explainability, AI transparency, AI autonomy etc.)
- Analyses of types of discrimination, fields of application, or kinds of implementation (discrimination with regard to gender, race, religion, age, health; discrimination in job hiring, credit granting, predictive policing, advertisement selection, recommendation systems; challenges and solutions for supervised learning, unsupervised learning, reinforcement learning etc.)
Submissions: Anonymized abstracts (maximum 500 words) must be submitted through EasyChair (link: https://easychair.org/conferences/?conf=bad2021). The deadline for submissions is April 30, 2021. Notifications of acceptance will be issued by June 30, 2021.
Organization: The workshop is organized by the interdisciplinary project “Bias and Discrimination in Big Data and Algorithmic Processing – BIAS” (www.bias-project.org), funded by the Volkswagen Foundation. It is part of the workshop series “Issues in Explainable AI” (www.explainable-intelligent.systems).
Contact: Prof. Uljana Feest, Leibniz University Hannover, Institute of Philosophy, Im Moore 21, D-30167 Hannover, Germany. E-mail: firstname.lastname@example.org