On Measurement Validity and Language Models: Increasing Validity and Decreasing Bias with Instructions

Moritz Laurer, Wouter van Atteveldt, Andreu Casas, Kasper Welbers

Research output: Contribution to journal › Article › peer-review

Abstract

Language models like BERT or GPT are becoming increasingly popular measurement tools, but are the measurements they produce valid? The literature suggests that there is still a relevant gap between the ambitions of computational text analysis methods and the validity of their outputs. One prominent threat to validity is hidden biases in the training data, where models learn group-specific language patterns instead of the concept researchers want to measure. This paper investigates to what extent these biases impact the validity of measurements created with language models. We conduct a comparative analysis across nine group types in four datasets with three types of classification models, focusing on the robustness of the models against biases and on the validity of their outputs. While we find that all types of models learn biases, the effects on validity are surprisingly small. In particular, when models receive instructions as an additional input, they become more robust against biases from the fine-tuning data and produce more valid measurements across different groups. An instruction-based model (BERT-NLI) sees its average test-set performance decrease by only 0.4% F1 macro when trained on biased data, and its error probability on groups it has not seen during training increases by only 0.8%.
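The instruction-based setup the abstract refers to (BERT-NLI) frames classification as natural-language inference: the model receives both the text to classify and a verbalized instruction (hypothesis) and judges whether the hypothesis is entailed. Below is a minimal sketch of this idea using the Hugging Face zero-shot-classification pipeline with an off-the-shelf NLI model; the model name, labels, and hypothesis template are illustrative and not the paper's exact configuration.

```python
from transformers import pipeline

# NLI-based classification: the "instruction" is supplied as a hypothesis
# template that gets paired with each input text; the model scores how well
# each verbalized label hypothesis is entailed by the text.
classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",  # illustrative off-the-shelf NLI model
)

text = "The government announced new subsidies for renewable energy."
labels = ["economy", "environment", "health"]  # illustrative label set

result = classifier(
    text,
    candidate_labels=labels,
    hypothesis_template="This text is about {}.",  # the instruction given to the model
)

# Labels are returned sorted by entailment score, highest first.
print(result["labels"][0], result["scores"][0])
```

Because the concept definition is carried by the hypothesis text rather than learned solely from the labeled examples, this kind of model can also be fine-tuned on task data while keeping the instruction as part of the input, which is the property the abstract links to greater robustness against biased training data.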
Original language: English
Number of pages: 17
Journal: Communication Methods and Measures
Early online date: 29 Jul 2024
DOIs
Publication status: E-pub ahead of print - 29 Jul 2024

Keywords

  • text-as-data
  • bias
  • validity
  • supervised machine learning
  • language models
