A First Look at Fairness of Machine Learning Based Code Reviewer Recommendation

Date

2024-07-18

Authors

Mohajer, Mohammad Mahdi

Journal Title

Journal ISSN

Volume Title

Publisher

Abstract

The fairness of machine learning (ML) approaches is critical to the reliability of modern artificial intelligence systems. Despite extensive study of this topic, the fairness of ML models in the software engineering (SE) domain remains underexplored. As a result, many ML-powered software systems, particularly those used in the software engineering community, remain prone to fairness issues. Taking a typical SE task, code reviewer recommendation, as its subject, this work conducts the first study investigating the fairness of ML applications in the SE domain. Our empirical study demonstrates that current state-of-the-art ML-based code reviewer recommendation techniques exhibit unfair and discriminatory behavior: male reviewers receive, on average, 7.25% more recommendations than female reviewers relative to their distribution in the reviewer set. This work also discusses why the studied ML-based code reviewer recommendation systems are unfair and provides solutions to mitigate the unfairness. Our study further indicates that existing mitigation methods can significantly enhance fairness in projects with a similar distribution of protected and privileged groups, but their effectiveness in improving fairness on imbalanced or skewed data is limited. Finally, we propose a solution that overcomes the drawbacks of existing mitigation techniques and tackles bias in imbalanced or skewed datasets.
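The disparity the abstract reports (a group's share of recommendations versus its share of the reviewer pool) can be sketched in a few lines of Python. This is a minimal illustration, not code from the thesis; the function name `recommendation_disparity` and the toy data are hypothetical.

```python
from collections import Counter

def recommendation_disparity(reviewers, recommendations):
    """For each group, compute (share of recommendations) minus
    (share of the reviewer pool). A positive gap means the group is
    over-recommended relative to its representation."""
    pool = Counter(gender for _, gender in reviewers)
    recs = Counter(gender for _, gender in recommendations)
    total_pool = sum(pool.values())
    total_recs = sum(recs.values())
    return {
        g: recs.get(g, 0) / total_recs - pool[g] / total_pool
        for g in pool
    }

# Hypothetical toy data: (reviewer_id, gender) pairs.
reviewers = [(1, "M"), (2, "M"), (3, "F"), (4, "F")]
recommendations = [(1, "M"), (2, "M"), (1, "M"), (3, "F")]
print(recommendation_disparity(reviewers, recommendations))
```

Here the pool is gender-balanced (50/50), but male reviewers receive 75% of the recommendations, yielding a +0.25 gap for "M" and -0.25 for "F"; a fairness-aware recommender would aim to drive both gaps toward zero.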

Description

Keywords

Computer science, Computer engineering

Citation