Transformers, currently the state of the art in natural language understanding (NLU) tasks, are prone to generating uncalibrated predictions or extreme probabilities, which makes it difficult to base downstream decisions on their output. In this paper we propose to build several inductive Venn–ABERS predictors (IVAPs), which are guaranteed to be well calibrated under minimal assumptions, on top of a selection of pre-trained transformers. We test their performance across a set of diverse NLU tasks and show that they are capable of producing well-calibrated probabilistic predictions that are uniformly spread over the [0,1] interval, all while retaining the original model's predictive accuracy.
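As a minimal sketch of the underlying idea, the snippet below shows how an inductive Venn–ABERS predictor could be applied to the scores of any binary classifier, such as a fine-tuned transformer. It uses scikit-learn's IsotonicRegression; the function name and the brute-force refit per test point are illustrative assumptions rather than the paper's implementation (more efficient algorithms exist for IVAPs).

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression


def ivap_predict(cal_scores, cal_labels, test_score):
    """Brute-force inductive Venn-ABERS prediction for one test score.

    cal_scores, cal_labels: classifier scores and 0/1 labels of a
    held-out calibration set; test_score: the classifier's score for
    the test object. Returns the multiprobability pair (p0, p1).
    """
    probs = []
    for hypothetical_label in (0, 1):
        # Augment the calibration set with the test object, once per
        # postulated label, and fit isotonic regression on the scores.
        scores = np.append(cal_scores, test_score)
        labels = np.append(cal_labels, hypothetical_label)
        iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
        iso.fit(scores, labels)
        probs.append(float(iso.predict([test_score])[0]))
    return probs[0], probs[1]


# Toy usage: merge (p0, p1) into one probability with the
# standard minimax formula p = p1 / (1 - p0 + p1).
p0, p1 = ivap_predict(np.array([0.1, 0.4, 0.6, 0.9]),
                      np.array([0, 0, 1, 1]),
                      test_score=0.7)
p = p1 / (1 - p0 + p1)
print(f"p0={p0:.3f}, p1={p1:.3f}, merged p={p:.3f}")
```

The pair (p0, p1) brackets the calibrated probability of the positive class; its width reflects how much the calibration set constrains the prediction for this particular score.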
Published: 2022
The 11th Symposium on Conformal and Probabilistic Prediction with Applications (COPA 2022), Brighton, United Kingdom, 24–26 Aug 2022