Miss Kyra Mozley

Personal profile

My past research has focused on AI, computer vision, and machine learning, and their applications to cyber security. I am currently developing methods to detect deepfake videos to make the web safer. However, I have a wide range of research interests, including (but not limited to) language models, de-anonymisation, cryptocurrencies and NFTs, and network security.

I graduated from the University of Cambridge in 2020 with a First Class BA (Hons) in Computer Science. For my undergraduate dissertation (supervised by Ildiko Pete), I researched machine learning methods for detecting network attacks. I evaluated both supervised and unsupervised models on the CICIDS2017 dataset, then selected the best-performing model (a Random Forest) and built it into my own intrusion detection system in Python. For final testing, I simulated network attacks myself using tools available in Kali Linux; detection ran in real time and included several attack types unseen during training, to test the model's generalisation.
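The core classification step of such a system can be sketched as below. This is a minimal illustration only, not the dissertation code: the synthetic two-cluster data stands in for pre-extracted CICIDS2017 flow features (e.g. durations, packet and byte counts), and all dimensions and hyperparameters are assumptions.

```python
# Illustrative sketch: train a Random Forest to separate benign from
# attack traffic, using synthetic stand-ins for flow features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "flows": benign and attack samples drawn from two clusters.
X_benign = rng.normal(loc=0.0, scale=1.0, size=(500, 4))
X_attack = rng.normal(loc=3.0, scale=1.0, size=(500, 4))
X = np.vstack([X_benign, X_attack])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = attack

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Held-out accuracy; a real IDS would instead score live flows as they arrive.
accuracy = clf.score(X_test, y_test)
```

In a deployed setting, the same fitted model would be called on features extracted from live traffic, which is what makes real-time detection possible.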


Since joining the CDT, I have conducted a mini research project on the security of e-voting, approaching the literature from a non-technical perspective: the usability challenges, the issues around developing and maintaining societal trust in such systems, and how a shift to e-voting could threaten voters' 'everyday' security by disrupting the routines and rituals of voting. This project took me out of my comfort zone and embodied the interdisciplinary approach of the CDT, focusing not on whether e-voting systems are cryptographically secure but on how they affect the security of the humans using them.


The final part of my taught first year required me to conduct a research project over the summer; I chose to work on detecting deepfake videos, supervised by Jassim Happa. First, I used reasoning grounded in psychology and affective computing to select relevant audio-visual features believed to convey emotion. Then, using Facebook's Deepfake Detection Challenge dataset, I trained a CNN Bi-LSTM classifier with early additive fusion of the visual and audio modalities, improving on the state of the art by 5.6%.
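Early additive fusion here means projecting each modality's per-frame features into a shared space and summing them before the temporal model sees them. A minimal sketch of that step, with all dimensions and the random projection matrices purely illustrative (a real model would learn them):

```python
# Illustrative early additive fusion of audio and visual frame features.
import numpy as np

rng = np.random.default_rng(0)

T = 16          # frames in a clip (illustrative)
d_visual = 512  # per-frame CNN visual embedding size (illustrative)
d_audio = 128   # per-frame audio feature size (illustrative)
d_shared = 256  # shared fusion dimension (illustrative)

visual = rng.normal(size=(T, d_visual))
audio = rng.normal(size=(T, d_audio))

# Stand-ins for learnable projections that map both modalities into
# the same space so they can be summed element-wise per frame.
W_v = rng.normal(size=(d_visual, d_shared)) * 0.01
W_a = rng.normal(size=(d_audio, d_shared)) * 0.01

fused = visual @ W_v + audio @ W_a  # shape (T, d_shared)
# `fused` would then be fed to a Bi-LSTM to model temporal dynamics,
# followed by a classification head.
```

Fusing early, before the recurrent layers, lets the Bi-LSTM model cross-modal temporal cues (e.g. mismatches between facial expression and vocal emotion) rather than treating each stream separately.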


I am now in my second year, beginning my PhD research, and have joined the S3 lab. I am still actively working on methods to detect deepfake videos.
