Training Data and Rationality

Konstantinos Mersinas, Theresa Sobb, Char Sample, Jonathan Bakdash, David Ormrod

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Human decision-making involves emotions, biases, and heuristics operating within an environmental context, and does not generally comply with rational decision-making models (e.g., utility maximization). Artificial Intelligence (AI) algorithms rely on training data for analysis, based upon observed cybersecurity incidents. The rarity of attack data and the existence of information asymmetries between attackers and defenders create uncertainty in estimating attack frequencies and can reduce the reliability of data. These characteristics can lead to a posteriori justification of attacker/defender choices, deeming successful actions rational and unsuccessful actions irrational.
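
For reference, the utility-maximization benchmark mentioned above is conventionally written as follows (a standard textbook formulation, not notation taken from the paper itself):

```latex
% Expected-utility benchmark for "rational" choice: a rational agent
% selects the action a* that maximizes expected utility over states s.
\[
  a^{*} = \arg\max_{a \in A} \sum_{s \in S} P(s)\, U(a, s)
\]
% Human choices that deviate from a* (driven by emotions, biases, or
% heuristics) are "irrational" only relative to this benchmark.
```
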
Data analysis also influences the fidelity of AI output. The need for broad-specification analysis often leads to analysing large volumes of data with increasingly complex models. The fuzzy definition of rationality creates an opening for exploitation, and data volume and model complexity may consequently reduce the usefulness of predictions in this fuzzy environment.
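
As a minimal illustration of this point (our own sketch with synthetic data, not an experiment from the paper), fitting an overly complex model to a few noisy observations, analogous to rare attack data, tends to reduce predictive usefulness on held-out cases:

```python
# Illustrative sketch only: with scarce, noisy observations, a more
# complex model can fit the training sample better yet predict
# held-out cases worse.
import numpy as np

rng = np.random.default_rng(0)

# Ten noisy observations of a simple underlying linear relationship.
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.3, size=10)

# Held-out data drawn from the same process.
x_test = np.linspace(0.05, 0.95, 50)
y_test = 2 * x_test + rng.normal(0, 0.3, size=50)

for degree in (1, 7):
    coeffs = np.polyfit(x_train, y_train, degree)  # fit polynomial model
    pred = np.polyval(coeffs, x_test)
    mse = np.mean((pred - y_test) ** 2)            # held-out error
    print(f"degree {degree}: held-out MSE = {mse:.3f}")
```

Here the degree-7 polynomial typically shows a larger held-out error than the simple linear model: complexity amplifies noise when observations are few.
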

AI lacks human characteristics such as “common sense” that make humans situationally adaptive. AI relies on learning and prioritizing likely events; however, lower-probability, “common sense”-defying actions can, upon success, reorder probabilistic outcomes (Gershman, Horvitz and Tenenbaum 2015). Such characteristics are not easily quantifiable in AI environments and can therefore decrease the efficacy of machine learning (ML) classification processes. This research effort differs from traditional Adversarial ML goals; we expose inherent flaws in “good” historical data based on successful lessons learned. In such cases, “rational” ML optimization potentially produces misleading results, often unlikely to trick humans, reflecting the potential limitations of AI and pattern recognition given the prior and available data. The exposure is cognitive, not algorithmic.
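
A hypothetical numerical sketch (strategy names and probabilities invented for illustration, not taken from the paper) of how a single successful low-prior action can reorder a defender's probability ranking via Bayes' rule:

```python
# Hypothetical example: a defender's model ranks attack strategies by
# prior probability. Observing one success by a low-prior,
# "common sense"-defying strategy reorders the posterior ranking via
#   P(strategy | success) proportional to P(success | strategy) * P(strategy)

priors = {"phishing": 0.70, "exploit_kit": 0.25, "physical_intrusion": 0.05}

# Assumed likelihood that an observed success came from each strategy,
# given defences tuned against the high-prior ones.
success_likelihood = {"phishing": 0.05, "exploit_kit": 0.10, "physical_intrusion": 0.90}

unnormalized = {s: success_likelihood[s] * p for s, p in priors.items()}
total = sum(unnormalized.values())
posterior = {s: v / total for s, v in unnormalized.items()}

for s in sorted(posterior, key=posterior.get, reverse=True):
    print(f"{s}: prior {priors[s]:.2f} -> posterior {posterior[s]:.2f}")
# The rare strategy overtakes the "likely" ones after a single success.
```
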

Considering the aforementioned factors, propagated biases and “irrational” choices, combined with issues inherent in data analysis and current AI capability limitations, weaken the predictive power of AI. These problems provide reason to re-examine “rationality” through the lens of AI-reliant cybersecurity, since weakened predictive power can in turn weaken security posture. Psychological aspects heavily influence the way humans approach, understand, and act to solve problems. Consequently, human-originated historical data in cybersecurity may be of reduced utility for AI due to a lack of contextual information.

This paper provides an introductory overview and review of AI in the context of human decision-making and cybersecurity. We investigate the notion of “rationality” and the types of AI approaches used in cybersecurity, discussing the differences between human and AI decision-making. We identify potential conflicts between human decision-making biases and AI data analysis.
Original language: English
Title of host publication: European Conference on the Impact of Artificial Intelligence and Robotics
Editors: Paul Griffiths, Mitt Nowshade Kabir
Place of Publication: Oxford, UK
Publisher: Academic Conferences and Publishing International Limited
Pages: 225-232
Number of pages: 8
ISBN (Electronic): 9781912764440
ISBN (Print): 9781912764457
Publication status: Published - Oct 2019

Keywords

  • Artificial Intelligence
  • Information Security
  • Cyber Security
  • Decision Making
  • Rationality
  • Biases
  • Biases in data
