QuantCon NYC 2018 review

John Fawcett, founder of Quantopian, with Rich Newman of FactSet

A big reason people visit New York is for the burgers, right? You’ve got Shake Shack, Minetta Tavern, the Burger Joint… I could construct quite a long list. OK, that’s probably just an excuse for me to visit New York! At the end of every April, though, there’s another reason, namely QuantCon NYC, Quantopian’s annual conference. I end up speaking at quite a few conferences over the year, mostly focused on finance and quant topics. QuantCon NYC is slightly different in terms of the attendee mix: whilst there were a lot of practitioners and academics, as you would expect, nearly a third of the attendees were students. In this write-up, I’ll give a brief review of the presentations which I attended at QuantCon.

The conference was opened by John Fawcett, the founder of Quantopian. He noted that the Quantopian community now has nearly 200,000 members. He also discussed the recent tie-up with FactSet to provide Quantopian’s software alongside FactSet’s data, for doing financial analysis in the cloud. The rationale is that you don’t need to spend time managing your own data locally, and can instead rely on a clean dataset stored in the cloud. I can definitely see this type of offering getting traction, especially with smaller funds, which don’t have the resources to manage their own infrastructure.

The first keynote presentation of the day was by Ernie Chan, who has written several very popular quant trading books (I have copies!). He discussed the main issue with trying to optimise trading parameters by looking at historical P&L: the number of trading signals is far smaller than the number of price points we have, hence it can be very easy to cherry-pick. It is possible to use older prices, however, this might mean using a period of history from a very different regime. He went through possible solutions, in particular simulating synthetic price series whose parameters are calibrated to your own time series. We can then create as many simulated price series as we need to do the parameter optimisation. Even then, he noted, we should consider using parameter points around the “optimal” region, as opposed to simply picking the mode of the distribution. In general, I would agree. I’ve never been a fan of choosing a single parameter set for a trading model, and instead prefer models which are stable across a multitude of different parameters. He noted that similar work has been done on the topic by Carr and Lopez de Prado. It definitely gave me a lot of ideas to apply to my own backtesting.
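
As a flavour of this approach, here is a minimal Python sketch, which is my own illustration rather than Ernie’s code: it calibrates a simple geometric Brownian motion to a historical series (a crude stand-in for the richer generative models he discussed), simulates many synthetic paths, and averages a moving-average crossover’s Sharpe ratio over them for each point of a parameter grid, so that we can look for stable parameter regions rather than a single peak.

```python
import numpy as np

def simulate_paths(prices, n_paths=100, seed=42):
    # calibrate the drift and volatility of log returns to the history
    rets = np.diff(np.log(prices))
    mu, sigma = rets.mean(), rets.std()
    rng = np.random.default_rng(seed)
    shocks = rng.normal(mu, sigma, size=(n_paths, len(rets)))
    # rebuild synthetic price paths from the simulated log returns
    return prices[0] * np.exp(np.cumsum(shocks, axis=1))

def crossover_sharpe(prices, fast, slow):
    # annualised Sharpe ratio of a simple moving-average crossover rule
    log_rets = np.diff(np.log(prices))
    fast_ma = np.convolve(prices, np.ones(fast) / fast, mode="valid")
    slow_ma = np.convolve(prices, np.ones(slow) / slow, mode="valid")
    fast_ma = fast_ma[-len(slow_ma):]  # align the two moving averages
    signal = np.where(fast_ma > slow_ma, 1.0, -1.0)
    pnl = signal[:-1] * log_rets[-(len(signal) - 1):]
    return np.sqrt(252) * pnl.mean() / (pnl.std() + 1e-12)

# average each parameter pair's Sharpe over many synthetic paths, then
# favour regions of the grid which hold up, not just the single peak
hist = 100 * np.exp(np.cumsum(np.random.default_rng(0).normal(3e-4, 0.01, 1000)))
paths = simulate_paths(hist, n_paths=50)
grid = {(f, s): np.mean([crossover_sharpe(p, f, s) for p in paths])
        for f in (5, 10, 20) for s in (50, 100, 150)}
```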


A rather different talk was given by Kerr Hatrick, from Morgan Stanley. The crux of his presentation was the evolution of market microstructure across various equity markets, which can have a big impact on your trading costs. He looked at TOPIX, noting how there could be certain autocorrelation into the close. This behaviour was at odds with KOSPI, where mean-reversion tended to be more prevalent at the close. He also examined the S&P 500 in a similar way, noting how spreads were rather larger at the beginning of the day and also towards the close, perhaps as we might expect. Turning to the UK, he gave a short event study showing the short-term reaction of FTSE stocks to news (in this case, using sentiment scores from Bloomberg News), which showed a rather strong relationship. In all, a fascinating talk, with some fantastic animations to illustrate his observations. I do think animation is used too little in finance as a whole, even though it is a natural way to present complicated results which evolve over time.
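
On the autocorrelation point, here is a minimal pandas sketch of the kind of diagnostic involved, my own hypothetical version with made-up inputs rather than the tick data from the talk:

```python
import pandas as pd

def closing_autocorr(returns, last_n_bars=12, lag=1):
    """Average lag-1 autocorrelation of the final intraday return bars of
    each session: positive values suggest trending into the close, whilst
    negative values suggest mean-reversion. `returns` is a pd.Series of
    intraday returns indexed by timestamp (hypothetical input data)."""
    by_day = returns.groupby(returns.index.date)
    per_day = by_day.apply(lambda r: r.tail(last_n_bars).autocorr(lag=lag))
    return per_day.mean()
```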


Joe Jevnik describing Osu!

Joe Jevnik from Quantopian gave a presentation on something rather less finance-like, namely the musical game Osu! The idea is that you get points by clicking in time with the beat of a song. His presentation was about using neural networks to predict his own score for a particular song, given his own extensive history of playing the game. He gave some code examples using Keras, the machine learning library which sits on top of TensorFlow, and used his own domain knowledge of Osu! to help select which features to examine. Whilst the domain was obviously somewhat different from finance, he gave some tips which could very easily be generalised, for example, the old adage that garbage in (with your data) leads to garbage out. In particular, it is important to understand the data and also how the data is collected.

His colleague, Scott Sanderson, then discussed the use of convex optimisation in finance, and gave some interesting tips on how to use SciPy to solve various optimisation problems. He recommended using wrappers such as CVXPY to help users formulate their optimisation problems in a more user-friendly way (I’ve included a small CVXPY sketch below).

We all know that investors have behavioural biases. However, it can often be difficult to avoid these! Cheng Peng’s talk discussed how we can use behavioural biases to help us construct trading strategies. He illustrated this with some examples of earnings-based strategies, which used ideas such as recency bias in the way investors might perceive the next earnings release.
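
On the CVXPY point, here is a minimal sketch of the kind of problem such wrappers make easy to express, a long-only mean-variance portfolio (with illustrative numbers of my own, not code from the talk):

```python
import cvxpy as cp
import numpy as np

# hypothetical expected returns and covariance matrix for four assets
mu = np.array([0.05, 0.07, 0.02, 0.04])
sigma = np.array([[0.10, 0.02, 0.01, 0.00],
                  [0.02, 0.12, 0.03, 0.01],
                  [0.01, 0.03, 0.08, 0.02],
                  [0.00, 0.01, 0.02, 0.09]])

w = cp.Variable(4)                      # portfolio weights
risk = cp.quad_form(w, sigma)           # portfolio variance
prob = cp.Problem(cp.Maximize(mu @ w - 0.5 * risk),
                  [cp.sum(w) == 1,      # fully invested
                   w >= 0])             # long-only
prob.solve()
print(w.value.round(3))
```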


I also gave a presentation at QuantCon. My main subject was using machine readable news (in this case Bloomberg News) to trade FX. I discussed both directional signals and the use of news volume to understand market volatility. In particular, I discussed how news volume about ECB and FOMC meetings can be used to enhance our understanding of EUR/USD overnight volatility around these events. Lastly, I gave a summary of various Python libraries which can be useful in financial data analysis. If you’ve got any suggestions for Python libraries you use for financial analysis, which you like but aren’t that popular, let me know!
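
To give a flavour of the news volume idea, here is a very stripped-down sketch, my own hypothetical illustration rather than the methodology from my talk: count news articles per day and see how the counts line up with overnight absolute returns.

```python
import pandas as pd

def news_volume_vs_vol(news_times, overnight_rets):
    """Correlation between daily news counts and overnight absolute returns.
    news_times: pd.DatetimeIndex of article timestamps (hypothetical feed);
    overnight_rets: pd.Series of overnight returns indexed by day."""
    counts = pd.Series(1, index=news_times).resample("D").sum()
    frame = pd.concat([counts, overnight_rets.abs()],
                      axis=1, keys=["news", "abs_ret"]).dropna()
    return frame["news"].corr(frame["abs_ret"])
```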


Marcos Lopez de Prado on avoiding the pitfalls of machine learning

To close the day, Marcos Lopez de Prado discussed some of the pitfalls you can encounter when doing machine learning in finance, some of which were drawn from his new book Advances in Financial Machine Learning. He noted that quant problems are very often best solved by working together in a team, rather than in silos, an approach better suited to discretionary portfolio managers. Simply looking at returns in our analysis often strips out the memory associated with financial time series, which is an important ingredient for prediction (his book suggests remedies such as fractional differentiation, which keeps some of that memory whilst still making the series stationary). Rather than sampling our data chronologically, why not try alternative schemes such as dollar bars, so that we give more importance to those points where there has been more information flow? He also discussed the problems with overfitting backtests. If you torture the data enough, it will tell you whatever you want! If we keep running loads of backtests, it is inevitable that we will find something. Sometimes, it is good to give up! Indeed, I have to agree with this. The more time you end up spending tweaking a trading strategy, the more I think it just encourages excessive data mining. I’m not talking about the stages where we need to clean and preprocess the data, more the stage where we are putting together the trading rule and backtesting. When his book is out in the UK, I’ll definitely be buying a copy!
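
As a minimal sketch of the dollar bar idea, here is my own simplified pandas illustration of the concept from the book, assuming a hypothetical trade tape with price and size columns:

```python
import pandas as pd

def dollar_bars(trades, bar_size=1_000_000):
    """Resample a trade tape into dollar bars: emit a new bar each time a
    fixed amount of dollar volume has traded, so busy periods produce more
    bars than quiet ones (unlike fixed chronological sampling)."""
    dollar_traded = (trades["price"] * trades["size"]).cumsum()
    bar_id = (dollar_traded // bar_size).astype(int)
    return trades.groupby(bar_id).agg(
        open=("price", "first"),
        high=("price", "max"),
        low=("price", "min"),
        close=("price", "last"),
        volume=("size", "sum"),
    )
```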


In summary, the conference proved quite insightful, and it definitely gave me a lot to think about when I get back to my desk and get to developing new quant trading strategies. Hopefully, I’ll be back in 2019 for the next QuantCon NYC!