Professor Alessio Sancetta

Personal profile

Interests: Empirical Finance, forecasting asset prices, estimation and inference for high frequency financial data, and market manipulation and abuse.


Media Coverage: A recent co-authored paper on Price Drift before US Macroeconomic News has drawn the attention of financial media such as Bloomberg, CNN, and the FT.

A summary can be found here:

The link to the paper can be found here:



Other work

I am Co-Founder and Co-Director of the Centre for Robust Inference in a Digital Economy (RIDE), Director of the MSc Finance, and Co-Director of the MSc Computational Finance (jointly with the Computer Science Department).

Additional information regarding industry experience, together with my CV (PDF format), can be found on my LinkedIn profile:



Research interests

1. High dimensional econometric and statistical inference under minimal/robust assumptions.

2. Modelling irregular time series for financial prices and related quantities.

3. Applications to market abuse, manipulation and information leakage.

4. Computational methods for econometric and statistical estimation for large datasets (time series and panel data).


1. I am interested in modelling nonlinear dependence and estimation of models with rich data types, possibly in the presence of a large number of variables to estimate. I have addressed the following questions over the years.

How can we capture multivariate dependence structures beyond the paradigm of the multivariate normal distribution? How can we explain changes in dependence during large market losses? How can we choose amongst many econometric models under minimal conditions? If no model is the true model, how can we combine many wrong models in order to enhance our predictions? How can we predict functional data such as yield curves and implied volatilities? How can we estimate models with many variables, possibly more than the sample size?

Keywords: copula function, model selection, model aggregation, high dimensional covariance matrices, sparse functional data prediction, predictions with many variables.
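As a minimal illustration of the model-aggregation idea above (a sketch for exposition, not code from any of my papers), competing and possibly misspecified forecasts can be combined with exponential weights based on their past squared errors, so that no single model needs to be the true one:

```python
import numpy as np

def aggregate_forecasts(forecasts, outcomes, eta=0.5):
    """Combine K competing model forecasts with exponential weighting.

    forecasts: (T, K) array, K model forecasts at each of T periods.
    outcomes:  (T,)   array of realised values.
    Returns the (T,) combined forecast; each period's weights depend
    only on past squared errors, so there is no look-ahead bias.
    """
    T, K = forecasts.shape
    cum_loss = np.zeros(K)           # cumulative squared error per model
    combined = np.empty(T)
    for t in range(T):
        w = np.exp(-eta * cum_loss)  # down-weight models with large past losses
        w /= w.sum()                 # normalise to a probability vector
        combined[t] = w @ forecasts[t]
        cum_loss += (forecasts[t] - outcomes[t]) ** 2
    return combined
```

Over time the weights concentrate on the better-performing models, which is the sense in which combining many "wrong" models can enhance predictions.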


2. Because of my past high frequency trading experience, I have had access to large datasets of high frequency data that comprise all quotes and trades for many financial instruments. I have spent considerable time analysing such datasets and devising models and estimation techniques for understanding high frequency markets. Within this strand of research I have addressed the following problems.

How can we model the probability of a buy trade arrival using all the order book information, including information from other financial instruments? How can we estimate such models in a flexible way under minimal/robust assumptions? How can we infer how the state of the order book affects the probability and intensity of buy (or sell) trades?

Keywords: intensity model, Hawkes model with many covariates, predictive sequential inference, flexible nonlinear estimation.
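To illustrate the intensity-model idea, the following is a minimal sketch of the conditional intensity of a univariate Hawkes process, in which each past trade excites the arrival rate of future trades with an exponentially decaying effect (the parameter values mu, alpha, beta are placeholders, not estimates from any dataset):

```python
import numpy as np

def hawkes_intensity(t, event_times, mu=0.2, alpha=0.8, beta=1.5):
    """Conditional intensity of a univariate Hawkes process at time t:

        lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))

    mu is the baseline arrival rate, alpha the jump in intensity after
    each event, and beta the speed at which that excitation decays.
    """
    past = np.asarray([s for s in event_times if s < t])
    return mu + alpha * np.exp(-beta * (t - past)).sum()
```

The models used in this research strand extend this basic self-exciting structure, for instance by letting the intensity depend on many order book covariates.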


3. Because of my past experience working on the trading floors of various financial institutions, I have developed an interest in market abuse, manipulation and information leakage. I have worked on predicting economic news announcements and on the relation between release procedures and price movements that result from information leakage. I have also worked on the problem of understanding how high frequency traders place orders in a way that can misrepresent the supply and demand of traded instruments so as to execute trades in their favour.

Within this context, I have addressed the following questions.

Can we infer informed trading ahead of economic announcements? Does the procedure used to release announcements avoid leakage of information and illegal profitable trading ahead of these releases? What is the economic value of this informed trading? Can high frequency traders manipulate the order book in order to obtain an unfair advantage? What is the simplest way to manipulate the order book? How can we identify whether a market is prone to market abuse due to specific features of its market microstructure?

Keywords: information leakage, economic announcements release procedures, spoofing.


4. Some of the above problems require dealing with large datasets, either in the form of time series or panel data. As a byproduct of the above research, I have worked on devising and coding flexible algorithms for analysis and estimation using large datasets with thousands of variables and possibly hundreds of millions of data points.

Keywords: constrained estimation, greedy algorithms, functional restrictions, large datasets.
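As a simple illustration of the greedy-algorithm idea (a textbook sketch, not code from my papers), orthogonal matching pursuit greedily selects one variable at a time for sparse estimation, which keeps computation feasible even when the number of candidate variables is large:

```python
import numpy as np

def omp(X, y, k):
    """Orthogonal matching pursuit: greedily pick k columns of X that
    best explain y, refitting the active set by least squares after
    each selection. A standard greedy algorithm for sparse estimation
    when the number of candidates may exceed the sample size."""
    n, p = X.shape
    residual = y.copy()
    active = []
    coef = np.zeros(p)
    for _ in range(k):
        corr = np.abs(X.T @ residual)   # correlation with current residual
        corr[active] = -np.inf          # never re-select a chosen column
        active.append(int(np.argmax(corr)))
        beta, *_ = np.linalg.lstsq(X[:, active], y, rcond=None)
        residual = y - X[:, active] @ beta
    coef[active] = beta
    return coef
```

Each iteration costs only one matrix-vector product plus a small least-squares fit, which is the property that makes greedy schemes attractive for very large datasets.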



