That wasn’t me! – Who is responsible for AI-supported decisions?

© Good Studio/ stock.adobe.com

Anna takes her little brother’s toy, and he starts to scream. Their father comes into the room and immediately understands the situation. “Anna, you must not take your brother’s toy!” he says accusingly, “Apologize to him!” Reluctantly, Anna returns the toy and apologizes.

In 2018, Volkswagen and Audi admitted their responsibility for supervisory failures in the emissions scandal and paid hefty fines related to their manipulation of emission values.

These examples illustrate the relevance of attributing responsibility in both private and public contexts. It’s important for us that people take responsibility for their actions and face consequences when necessary. We hold each other accountable for misconduct and expect admissions of guilt and reparations when someone does something wrong. But what happens to this practice when decisions are made not by humans, but rather by AI systems?

The problem of responsibility gaps

Imagine the HR department of a large company that has fully automated its recruitment process. An autonomous AI recruitment system receives all the application information for a specific position. It independently filters out unsuitable applications. Suppose the system, based on certain group characteristics—such as race and gender—wrongly classifies someone as unsuitable for the advertised position and discards their application. Thus, the recruitment system exhibits bias and discriminates against, for example, a Black applicant.

Humans can also be biased against members of certain social groups—recruiters might also have rejected the Black applicant due to prejudice. However, there’s a significant difference: if a recruiter discriminates against an applicant, they are clearly responsible. We can blame the person or even take legal action against them or the company. But who is to blame when the fully automated recruitment system discriminates against the applicant? In the philosophical debate about responsibility gaps, it is argued that in some cases of this kind, no one is responsible anymore. There’s no person making the discriminatory decision, so no one can or should be held accountable.

Indirect responsibility

But wait! Isn’t it obvious that the company using the discriminatory system for recruitment is responsible? Alternatively, shouldn’t the developers of the system bear responsibility since they designed it poorly, allowing it to discriminate?

In some cases, responsibility gaps can certainly be closed this way. Operators or developers of a system may not be directly responsible for the discriminatory decision, but they are at least indirectly responsible: they are accountable for having used or developed a discriminatory AI system.

The problem of many hands

Unfortunately, this line of argument doesn’t always work. Some cases present a “problem of many hands”: in complex situations with many parties involved, it is often impossible to determine who is truly to blame for a bad outcome. One possible reason is that, given the complexity of the overall situation, the individual parties could not foresee that their actions would contribute to that outcome.

In the case of the fully automated recruitment system, we have many participants and a complex overall situation. It’s possible that the developers did everything to exclude bias, such as testing the system for discriminatory decisions (and finding none). Perhaps the company uses the system because it was certified as bias-free by a trusted, independent organization. Despite these precautions, bias may still exist but manifest only in rare situations, such as, in our example, through a combination of group affiliations like gender and race. Since algorithmic bias is widespread and it is unclear whether we can always detect it, this is a serious issue. In such a case, it’s difficult to hold the developers or operators even indirectly responsible for the discriminatory decision of the recruitment system. The problem of the responsibility gap persists.
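To make the testing problem concrete, here is a minimal sketch of the kind of audit the developers might have run. It assumes hypothetical application records with 'gender' and 'race' fields and a hypothetical screen() function standing in for the recruitment system; it compares selection rates across intersectional subgroups using the common “four-fifths” rule of thumb.

```python
from collections import defaultdict

def audit_intersectional_bias(applications, screen, min_ratio=0.8):
    """Compare selection rates across intersectional subgroups.

    `applications` is a list of dicts with (hypothetical) keys 'gender'
    and 'race' plus whatever features `screen` consumes; `screen` is the
    screening system under test and returns True if an application passes
    the automated filter. Subgroups whose selection rate falls below
    `min_ratio` of the best subgroup's rate are flagged.
    """
    passed = defaultdict(int)
    total = defaultdict(int)
    for app in applications:
        group = (app["gender"], app["race"])      # intersectional subgroup
        total[group] += 1
        passed[group] += int(screen(app))

    rates = {g: passed[g] / total[g] for g in total}
    best = max(rates.values())
    flagged = {g: r for g, r in rates.items() if best > 0 and r / best < min_ratio}
    return rates, flagged
```

Note that an aggregate test like this can easily come back clean if the audit data contains too few applicants in a rare subgroup combination, which is exactly how a bias can slip past both developers and certifiers.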

Enter: human in the loop

A better solution is to organize the recruitment process differently. In many cases, decisions of great importance should not be fully automated but should instead involve a human in the loop. AI systems then serve only as decision support, providing recommendations on which a human decides. This person then bears responsibility for the AI-supported decision.
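As a rough sketch of what this division of labor could look like in code, the following assumes hypothetical Recommendation and Decision records and a placeholder ask_human callback for however the recruiter is consulted; the point is simply that the system only recommends, while the recorded decision names the human who made it.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    applicant_id: str
    advance: bool          # system's suggestion: invite to interview?

@dataclass
class Decision:
    applicant_id: str
    advance: bool
    decided_by: str        # the human who carries responsibility

def decide_with_human(recommendation: Recommendation, recruiter: str, ask_human) -> Decision:
    """The system only recommends; the recruiter makes and owns the final call."""
    final = ask_human(recommendation)   # the human may accept or overrule the recommendation
    return Decision(recommendation.applicant_id, final, decided_by=recruiter)
```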

Can this proposal solve all problems? Let’s assume that recruiters receive recommendations from the system without knowing why it makes them, or whether they are based on a hidden bias in the system. We cannot hold individuals responsible for decisions if they were unaware of their potential moral implications.

We need explainable AI

To enable responsibility, AI systems should be able to explain their recommendations in such cases. If the decision support system’s recommendation against an applicant is explained by the fact that she is a Black woman, recruiters in the loop can readily see that following it would be discriminatory. If the automated recommendation is followed anyway and the applicant is not invited, the recruiters understand the moral significance of their decision, and we can hold them accountable for it.
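A simple illustration of such an explanation, under the strong assumption that the screening score is a linear model with named coefficients (real systems would typically need model-agnostic attribution methods, but the idea is the same): the recruiter is shown which features drove the recommendation, and a warning is raised when protected attributes dominate.

```python
PROTECTED = {"gender", "race"}

def explain_recommendation(weights, features):
    """Return per-feature contributions for a (hypothetical) linear screening score.

    `weights` maps feature names to model coefficients; `features` maps
    feature names to the applicant's values. The ranked contributions show
    the recruiter why the system recommends against an applicant.
    """
    contributions = {name: weights.get(name, 0.0) * value
                     for name, value in features.items()}
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    top_feature, _ = ranked[0]
    if top_feature in PROTECTED:
        print(f"Warning: recommendation driven mainly by '{top_feature}' "
              "- following it may be discriminatory.")
    return ranked
```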

These considerations highlight the relevance of explainable AI for solving the problem of responsibility gaps. They raise the questions of, first, how we can obtain explanations of a system’s outputs at all and, second, which kinds of explanations are suitable to truly enable recruiters, for example, to make responsible decisions. The problem of the responsibility gap thus opens up space for exciting interdisciplinary research projects at the intersection of computer science and philosophy.

Prof. Dr. Eva Schmidt

Eva Schmidt is a Professor of theoretical philosophy at the Department of Philosophy and Political Science. She works in epistemology and the philosophy of mind and action. She is a PI of the project Explainable Intelligent Systems (EIS), funded by the Volkswagen Foundation. Previously, she worked at the University of Zurich in the project The Structure and Development of Understanding Actions and Reasons. She has published in journals such as […]
