What can we learn about language from artificial intelligence? Analysing word meanings through the lenses of modern natural language models

Research output: Thesis › Doctoral Thesis



The introduction of artificial neural networks has made it possible to use human language to interact with modern computer devices at various levels, from speech recognition to machine translation. This thesis focuses on better understanding how word meanings are represented and deployed in written language through the lenses of modern natural language models. Specifically, this work exploits modern natural language models as a means of analysing the contextual information contained in written language. The structure of this thesis is as follows. Chapter 1 provides a literature review of the representation of word meanings and their deployment in online sentence reading, as well as a general introduction to computational models of word meanings. Chapter 2 outlines some of the methodological considerations that were undertaken whilst conducting this research and provides a detailed description of the language models employed. Chapters 3 and 4 investigate the extent to which word meanings occupy different contextual spaces in written language and whether this relates to the theoretical structure of a word’s ambiguity as represented in dictionaries. This work begins by investigating the relation between semantic diversity and lexical ambiguity in Chapter 3. Building on this work, Chapter 4 focuses on artificial neural network models to understand whether word meanings are distributed in written language and to what extent contextualised word embeddings represent nuances of meaning. Finally, Chapter 5 exploits similar models to assess the extent to which low-level predictions of upcoming words contribute to the efficiency of skilled reading. Chapter 6 provides a general discussion of the overall findings and their relation to recent debates in the artificial intelligence community.
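The semantic diversity measure examined in Chapter 3 is commonly operationalised (following Hoffman et al., 2013) as the negative log of the mean pairwise cosine similarity between the contexts a word occurs in. A minimal sketch of that idea, using toy bag-of-words context vectors; the function names and the example vectors are illustrative, not taken from the thesis:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two context vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def semantic_diversity(context_vectors):
    """Negative log of the mean pairwise cosine similarity between the
    contexts a word occurs in. Words whose contexts are nearly identical
    score near 0; words used across varied contexts score higher."""
    n = len(context_vectors)
    sims = [cosine(context_vectors[i], context_vectors[j])
            for i in range(n) for j in range(i + 1, n)]
    return -float(np.log(np.mean(sims)))

# Toy contexts over a 4-word vocabulary (word counts per context).
stable = [np.array([3., 1., 0., 0.])] * 3          # near-identical contexts
diverse = [np.array([3., 1., 0., 0.]),             # varied contexts
           np.array([0., 1., 2., 0.]),
           np.array([0., 0., 1., 3.])]

print(semantic_diversity(stable))   # ~0.0: contexts indistinguishable
print(semantic_diversity(diverse))  # clearly > 0: contexts differ
```

In published work the context vectors are typically derived from a trained distributional model (e.g. LSA) rather than raw counts; the count vectors here only serve to make the computation concrete.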
Original language: English
Awarding Institution
  • Royal Holloway, University of London
Supervisors
  • Rastle, Kathy, Supervisor
  • Watkins, Chris, Supervisor
Award date: 1 Dec 2022
Publication status: Unpublished - 2022


Keywords
  • Ambiguity
  • Language models
  • BERT
  • GPT-2
  • Eye movements
  • Predictability
