Abstract
Ensuring that all classes of objects are detected with equal accuracy is essential in AI systems. For instance, being unable to identify any one class of objects could have fatal consequences in autonomous driving systems. Hence, ensuring the reliability of image recognition systems is crucial. This work addresses how to validate group fairness in image recognition software. We propose a distribution-aware fairness testing approach (called DISTROFAIR) that systematically exposes class-level fairness violations in image classifiers via a synergistic combination of out-of-distribution (OOD) testing and semantic-preserving image mutation. DISTROFAIR automatically learns the distribution (e.g., number/orientation) of objects in a set of images. It then systematically mutates objects in the images to become OOD using three semantic-preserving image mutations: object deletion, object insertion, and object rotation. We evaluate DISTROFAIR using two well-known datasets (CityScapes and MS-COCO) and three major commercial image recognition services (namely, Amazon Rekognition, Google Cloud Vision, and Azure Computer Vision). Results show that about 21% of images generated by DISTROFAIR reveal class-level fairness violations using either ground truth or metamorphic oracles. DISTROFAIR is up to 2.3× more effective than two main baselines, i.e., (a) an approach which generates images only within the distribution (ID) and (b) fairness analysis using only the original image dataset. We further observed that DISTROFAIR is efficient, generating 460 images per hour on average. Finally, we evaluate the semantic validity of our approach via a user study with 81 participants, using 30 real images and 30 corresponding mutated images generated by DISTROFAIR. We found that images generated by DISTROFAIR are 80% as realistic as real-world images.
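The distribution-learning and OOD-mutation idea described above can be illustrated with a minimal sketch. This is not the paper's actual implementation: the annotation format (per-image class-to-count dictionaries), the range-based notion of "in distribution", and the function names are all simplifying assumptions for illustration only.

```python
from collections import defaultdict

def learn_count_distribution(annotations):
    """Learn the observed per-class object-count range across a dataset.

    `annotations` is a hypothetical list of per-image dicts mapping
    class name -> object count (not DISTROFAIR's real data format).
    """
    counts = defaultdict(list)
    for image in annotations:
        for cls, n in image.items():
            counts[cls].append(n)
    # The observed (min, max) count per class stands in for the
    # learned distribution.
    return {cls: (min(ns), max(ns)) for cls, ns in counts.items()}

def is_out_of_distribution(mutated_counts, distribution):
    """A mutated image is OOD if any class count falls outside the
    range observed in the original dataset (e.g., after object
    insertion or deletion)."""
    for cls, n in mutated_counts.items():
        lo, hi = distribution.get(cls, (0, 0))
        if n < lo or n > hi:
            return True
    return False
```

A mutation such as inserting an extra object of some class would then be kept for testing only if the resulting counts make `is_out_of_distribution` return `True`, steering generation toward OOD inputs rather than ID ones.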
| Original language | English |
| --- | --- |
| Article number | 112090 |
| Number of pages | 16 |
| Journal | Journal of Systems and Software |
| Volume | 215 |
| Early online date | 27 May 2024 |
| DOIs | |
| Publication status | Published - Sept 2024 |
Keywords
- Software testing
- Fairness testing
- Computer vision