Training Data and Rationality. / Mersinas, Konstantinos; Sobb, Theresa; Sample, Char; Bakdash, Jonathan; Ormrod, David.

European Conference on the Impact of Artificial Intelligence and Robotics. ed. / Paul Griffiths; Mitt Nowshade Kabir. Oxford, UK : Academic Conferences and Publishing International Limited, 2019. p. 225-232.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Published

Harvard

Mersinas, K, Sobb, T, Sample, C, Bakdash, J & Ormrod, D 2019, Training Data and Rationality. in P Griffiths & MN Kabir (eds), European Conference on the Impact of Artificial Intelligence and Robotics. Academic Conferences and Publishing International Limited, Oxford, UK, pp. 225-232. https://doi.org/10.34190/ECIAIR.19.075

APA

Mersinas, K., Sobb, T., Sample, C., Bakdash, J., & Ormrod, D. (2019). Training Data and Rationality. In P. Griffiths, & M. N. Kabir (Eds.), European Conference on the Impact of Artificial Intelligence and Robotics (pp. 225-232). Academic Conferences and Publishing International Limited. https://doi.org/10.34190/ECIAIR.19.075

Vancouver

Mersinas K, Sobb T, Sample C, Bakdash J, Ormrod D. Training Data and Rationality. In Griffiths P, Kabir MN, editors, European Conference on the Impact of Artificial Intelligence and Robotics. Oxford, UK: Academic Conferences and Publishing International Limited. 2019. p. 225-232 https://doi.org/10.34190/ECIAIR.19.075

Author

Mersinas, Konstantinos ; Sobb, Theresa ; Sample, Char ; Bakdash, Jonathan ; Ormrod, David. / Training Data and Rationality. European Conference on the Impact of Artificial Intelligence and Robotics. editor / Paul Griffiths ; Mitt Nowshade Kabir. Oxford, UK : Academic Conferences and Publishing International Limited, 2019. pp. 225-232

BibTeX

@inproceedings{8daf5f35a9ae4ddc9a3dd502bbb4e88f,
title = "Training Data and Rationality",
abstract = "Human decision-making includes emotions, biases, and heuristics within environmental context, and does not generally comply with rational decision-making (e.g. utility maximization). Artificial Intelligence (AI) algorithms rely on training data for analysis, based upon observed cybersecurity incidents. The rarity of attack data and the existence of information asymmetries between attackers and defenders create uncertainty in estimating attack frequencies and can reduce the reliability of data. These characteristics can lead to a posteriori justification of attacker/defender choices, deeming successful actions as rational, and vice versa. Data analysis also influences the fidelity of AI output. The need for broad specification analysis often leads to analysing large amounts of data and using increasingly complex models. The fuzzy definition of rationality creates an opening for exploitation. Data volumes and model complexity may consequently reduce the usefulness of predictions in this fuzzy environment. AI lacks adaptive human characteristics like “common sense” that make humans situationally adaptive. AI relies on learning and prioritizing likely events; however, lower-probability, {\textquoteleft}common sense{\textquoteright}-defying actions can, upon success, reorder probabilistic outcomes (Gershman, Horvitz and Tenenbaum 2015). Such characteristics are not easily quantifiable in AI environments and can therefore decrease the efficacy of machine learning (ML) classification processes. This research effort differs from traditional Adversarial ML goals; we expose inherent flaws in “good” historical data, based on successful lessons learned. In such cases, “rational” ML optimization potentially produces misleading results, often unlikely to trick humans, reflecting the potential limitations of AI and pattern recognition given the prior and available data. The exposure is cognitive, not algorithmic. Considering the aforementioned factors, propagated biases and “irrational” choices, combined with issues inherent in data analysis and current AI capability limitations, weaken the predictive power of AI. These problems provide reason to reconsider “rationality” through the lens of AI-reliant cybersecurity, as they potentially weaken security posture. Psychological aspects heavily influence the way humans approach, understand, and act to solve problems. Consequently, human-originated historical data in cybersecurity may be of reduced utility for AI, due to a lack of contextual information. This paper provides an introductory overview and review of AI in the context of human decision-making and cybersecurity. We investigate the notion of “rationality” and types of AI approaches in cybersecurity, discussing the differences between human and AI decision-making. We identify potential conflicts between human decision-making biases and AI data analysis.",
keywords = "Artificial Intelligence, Information Security, Cyber Security, Decision Making, Rationality, biases, Biases in data",
author = "Konstantinos Mersinas and Theresa Sobb and Char Sample and Jonathan Bakdash and David Ormrod",
year = "2019",
month = oct,
doi = "10.34190/ECIAIR.19.075",
language = "English",
isbn = "9781912764457",
pages = "225--232",
editor = "Griffiths, Paul and Kabir, {Mitt Nowshade}",
booktitle = "European Conference on the Impact of Artificial Intelligence and Robotics",
publisher = "Academic Conferences and Publishing International Limited",

}

RIS

TY - GEN

T1 - Training Data and Rationality

AU - Mersinas, Konstantinos

AU - Sobb, Theresa

AU - Sample, Char

AU - Bakdash, Jonathan

AU - Ormrod, David

PY - 2019/10

Y1 - 2019/10

N2 - Human decision-making includes emotions, biases, and heuristics within environmental context, and does not generally comply with rational decision-making (e.g. utility maximization). Artificial Intelligence (AI) algorithms rely on training data for analysis, based upon observed cybersecurity incidents. The rarity of attack data and the existence of information asymmetries between attackers and defenders create uncertainty in estimating attack frequencies and can reduce the reliability of data. These characteristics can lead to a posteriori justification of attacker/defender choices, deeming successful actions as rational, and vice versa. Data analysis also influences the fidelity of AI output. The need for broad specification analysis often leads to analysing large amounts of data and using increasingly complex models. The fuzzy definition of rationality creates an opening for exploitation. Data volumes and model complexity may consequently reduce the usefulness of predictions in this fuzzy environment. AI lacks adaptive human characteristics like “common sense” that make humans situationally adaptive. AI relies on learning and prioritizing likely events; however, lower-probability, ‘common sense’-defying actions can, upon success, reorder probabilistic outcomes (Gershman, Horvitz and Tenenbaum 2015). Such characteristics are not easily quantifiable in AI environments and can therefore decrease the efficacy of machine learning (ML) classification processes. This research effort differs from traditional Adversarial ML goals; we expose inherent flaws in “good” historical data, based on successful lessons learned. In such cases, “rational” ML optimization potentially produces misleading results, often unlikely to trick humans, reflecting the potential limitations of AI and pattern recognition given the prior and available data. The exposure is cognitive, not algorithmic. Considering the aforementioned factors, propagated biases and “irrational” choices, combined with issues inherent in data analysis and current AI capability limitations, weaken the predictive power of AI. These problems provide reason to reconsider “rationality” through the lens of AI-reliant cybersecurity, as they potentially weaken security posture. Psychological aspects heavily influence the way humans approach, understand, and act to solve problems. Consequently, human-originated historical data in cybersecurity may be of reduced utility for AI, due to a lack of contextual information. This paper provides an introductory overview and review of AI in the context of human decision-making and cybersecurity. We investigate the notion of “rationality” and types of AI approaches in cybersecurity, discussing the differences between human and AI decision-making. We identify potential conflicts between human decision-making biases and AI data analysis.

AB - Human decision-making includes emotions, biases, and heuristics within environmental context, and does not generally comply with rational decision-making (e.g. utility maximization). Artificial Intelligence (AI) algorithms rely on training data for analysis, based upon observed cybersecurity incidents. The rarity of attack data and the existence of information asymmetries between attackers and defenders create uncertainty in estimating attack frequencies and can reduce the reliability of data. These characteristics can lead to a posteriori justification of attacker/defender choices, deeming successful actions as rational, and vice versa. Data analysis also influences the fidelity of AI output. The need for broad specification analysis often leads to analysing large amounts of data and using increasingly complex models. The fuzzy definition of rationality creates an opening for exploitation. Data volumes and model complexity may consequently reduce the usefulness of predictions in this fuzzy environment. AI lacks adaptive human characteristics like “common sense” that make humans situationally adaptive. AI relies on learning and prioritizing likely events; however, lower-probability, ‘common sense’-defying actions can, upon success, reorder probabilistic outcomes (Gershman, Horvitz and Tenenbaum 2015). Such characteristics are not easily quantifiable in AI environments and can therefore decrease the efficacy of machine learning (ML) classification processes. This research effort differs from traditional Adversarial ML goals; we expose inherent flaws in “good” historical data, based on successful lessons learned. In such cases, “rational” ML optimization potentially produces misleading results, often unlikely to trick humans, reflecting the potential limitations of AI and pattern recognition given the prior and available data. The exposure is cognitive, not algorithmic. Considering the aforementioned factors, propagated biases and “irrational” choices, combined with issues inherent in data analysis and current AI capability limitations, weaken the predictive power of AI. These problems provide reason to reconsider “rationality” through the lens of AI-reliant cybersecurity, as they potentially weaken security posture. Psychological aspects heavily influence the way humans approach, understand, and act to solve problems. Consequently, human-originated historical data in cybersecurity may be of reduced utility for AI, due to a lack of contextual information. This paper provides an introductory overview and review of AI in the context of human decision-making and cybersecurity. We investigate the notion of “rationality” and types of AI approaches in cybersecurity, discussing the differences between human and AI decision-making. We identify potential conflicts between human decision-making biases and AI data analysis.

KW - Artificial Intelligence

KW - Information Security

KW - Cyber Security

KW - Decision Making

KW - Rationality

KW - biases

KW - Biases in data

U2 - 10.34190/ECIAIR.19.075

DO - 10.34190/ECIAIR.19.075

M3 - Conference contribution

SN - 9781912764457

SP - 225

EP - 232

BT - European Conference on the Impact of Artificial Intelligence and Robotics

A2 - Griffiths, Paul

A2 - Kabir, Mitt Nowshade

PB - Academic Conferences and Publishing International Limited

CY - Oxford, UK

ER -