While economic information undeniably wields a significant and widespread influence on financial markets, the systematic incorporation of macroeconomic data into trading strategies has so far been limited. This reflects skepticism towards economic theory and serious data problems, such as revisions, distortions, calendar effects, and, above all, the lack of point-in-time formats. However, the emergence of industry-wide quantamental indicators and the rise of statistical learning methods in financial markets are making macroeconomic information more practical and powerful. Statistical learning applied to macro-quantamental indicators has already been demonstrated successfully, and a range of machine learning techniques is poised to further improve the utilization of economic information.
The principal case for incorporating macroeconomic information into trading strategies has long been compelling. Economic theory implies that market prices are part of a broader macroeconomic equilibrium and, hence, depend on economic states and shocks. Meanwhile, full information efficiency of the broader market is unlikely due to research costs and attention limitations (view post here). Discretionary trading rooted in macroeconomic fundamentals has a long history and has been the catalyst for numerous successes in the hedge fund industry. Furthermore, trading based on macroeconomic information is not a zero-sum game. Trading profits are not solely derived from the losses of others but are also paid out of the economic gains from a faster and smoother alignment of market prices with economic conditions. Therefore, technological advancements in this field can increase the value generation or “alpha” of the asset management industry overall (view post here).
And yet, macroeconomic data have hitherto played a very modest role in systematic trading. This reflects two major obstacles: widespread skepticism towards the practical value of economic theory, and the poor suitability of standard economic data for trading, above all their lack of point-in-time formats.
Generally, data wrangling means transforming raw, irregular data into clean, tidy data sets. In many fields of research, this mainly requires reformatting and relabelling. For macroeconomic trading indicators, the wrangling and preparation of data are a lot more comprehensive, typically involving the alignment of values with their release dates, frequency conversion, and the reconstruction of historical information states.
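The sketch below shows what a minimal wrangling step can look like in pandas, assuming hypothetical release data with invented column names: each raw record carries both the period it refers to and the date on which it became publicly known, and the result is a tidy data set keyed by availability.

```python
import pandas as pd

# Hypothetical raw release data: each row is one data release, carrying both
# the observation period it refers to and the date it became publicly known.
raw = pd.DataFrame({
    "country": ["USA", "USA", "EUR", "EUR"],
    "period": ["2023-01", "2023-02", "2023-01", "2023-02"],
    "release_date": ["2023-02-14", "2023-03-14", "2023-02-20", "2023-03-21"],
    "cpi_yoy": [6.4, 6.0, 8.6, 8.5],  # invented CPI figures for illustration
})

# Enforce types, then key each value by the date on which it was actually
# known, so that later analysis can respect point-in-time availability.
raw["release_date"] = pd.to_datetime(raw["release_date"])
raw["period"] = pd.PeriodIndex(raw["period"], freq="M")
tidy = raw.sort_values("release_date").set_index(["country", "release_date"]).sort_index()
print(tidy)
```

Real-world wrangling adds many more steps, such as vintage reconstruction, seasonal adjustment, and outlier handling, but the principle of keying values by availability dates remains the same.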
News and comments are major drivers for asset prices, probably more so than conventional price and economic data. Yet, no financial professional can read and analyze the vast flow of verbal information. Therefore, comprehensive news analysis is increasingly becoming the domain of natural language processing, a technology that supports the quantitative evaluation of humans’ natural language (view post here). Natural language processing delivers textual information in a structured form that makes it usable for financial market analysis. A range of useful packages is available for extracting and analyzing financial news and comments.
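As a minimal illustration, the snippet below scores the tone of two invented headlines with NLTK's VADER sentiment analyzer, one of many available packages; production-grade news analytics would use far richer models and licensed news feeds.

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off lexicon download

sia = SentimentIntensityAnalyzer()
headlines = [
    "Central bank signals faster rate hikes as inflation surges",
    "Manufacturing output rebounds strongly after supply disruptions",
]
for text in headlines:
    # polarity_scores returns negative/neutral/positive shares plus a
    # compound score in [-1, 1] that can serve as a crude sentiment signal.
    print(round(sia.polarity_scores(text)["compound"], 2), "|", text)
```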
Overall, statistical programming nowadays allows the construction of quantamental systems (view post here). A quantamental system combines customized, high-quality databases and statistical programming in order to systematically investigate relations between market returns and plausible predictors. The term “quantamental” refers to a joint quantitative and fundamental approach to investing.
Macro quantamental indicators record the market’s information state with respect to macroeconomic activity, balance sheets, and sentiment. Quantamental indicators are distinct from regular economic time series insofar as they represent information that was available at the time of reference. Consequently, indicator values are comparable to market price data and are well-suited for backtesting trading ideas and implementing algorithmic strategies.
Quantamental indicators increase the market’s macro information efficiency (and trading profits) for two simple reasons: they record economic information in a point-in-time format that is directly comparable with market prices, and they spare investors much of the cost of research and data wrangling.
The main source of macro quantamental information for institutional investors is the J.P. Morgan Macrosynergy Quantamental System (JPMaQS). It is a service that makes it easy to use quantitative-fundamental (“quantamental”) information for financial market trading. With JPMaQS, users can access a wide range of relevant macro quantamental data that are designed for algorithmic strategies, as well as for backtesting macro trading principles in general.
Quantamental indicators are principally based on a two-dimensional data set: one dimension is the real-time date on which information is recorded, and the other is the observation period that the information refers to.
For any given real-time date, a quantamental indicator is calculated based on the full information state: typically a time series, possibly derived from other time series and estimates, that was available at or before that real-time date. Such an information state-contingent time series is called a data vintage.
The two-dimensional structure of the data means that, unlike regular time series, quantamental indicators convey two types of change: changes in reported values and reported changes in values. The time series of a quantamental indicator itself shows changes in reports, arising from updates of the market’s information state. By contrast, quantamental indicators of changes show reported dynamics based on the latest information state alone. The sketch below illustrates the distinction.
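A toy example, assuming an invented vintage matrix of quarterly GDP growth reports, makes the two types of change concrete: the rows are information states on successive real-time dates, and the columns are observation periods.

```python
import numpy as np
import pandas as pd

# Hypothetical vintage matrix: each row is the information state on one
# real-time date; each column is an observation period of GDP growth.
vintages = pd.DataFrame(
    [[2.0, np.nan, np.nan],
     [2.2, 1.5, np.nan],
     [2.1, 1.7, 1.9]],
    index=pd.to_datetime(["2023-02-28", "2023-03-31", "2023-04-30"]),
    columns=pd.PeriodIndex(["2022Q4", "2023Q1", "2023Q2"], freq="Q"),
)

# Changes in reported values: the quantamental time series itself, i.e. the
# latest reported value at each real-time date, which moves as the
# information state is updated.
latest_reported = vintages.apply(lambda row: row.dropna().iloc[-1], axis=1)

# Reported changes in values: period-on-period dynamics taken from the most
# recent information state (the last vintage) alone.
reported_changes = vintages.iloc[-1].diff()

print(latest_reported)
print(reported_changes)
```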
Statistical learning refers to a set of tools or models that help extract insights from datasets, such as macro-quantamental indicators. Not only does statistical learning support the estimation of relations across variables (parameters), but it also governs the choice of models for such estimates (hyperparameters). Moreover, for macro trading, statistical learning has another major benefit: it allows realistic backtesting. Rather than choosing models and features arbitrarily, and potentially with hindsight, statistical learning can simulate a rational, rules-based choice of method in the past. Understanding statistical learning is critical in modern financial markets, even for non-quants (view post here). This is because statistical learning illustrates and replicates how investors’ experiences in markets shape their future behavior.
Within statistical learning pipelines, simple and familiar econometric models can be deployed to simulate point-in-time economic analysis.
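For instance, the sketch below simulates point-in-time analysis with an expanding-window linear regression on synthetic data; the feature names are invented, and each prediction uses only observations available before its date.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
dates = pd.date_range("2015-01-01", periods=96, freq="MS")
X = pd.DataFrame(rng.normal(size=(96, 2)), index=dates,
                 columns=["growth_surprise", "inflation_trend"])
# Synthetic next-month return with a known linear relation plus noise.
y = 0.5 * X["growth_surprise"] - 0.3 * X["inflation_trend"] \
    + rng.normal(scale=0.5, size=96)

preds = {}
for t in range(36, len(dates)):
    # Fit only on observations that precede date t, mimicking the
    # point-in-time discipline of quantamental analysis.
    model = LinearRegression().fit(X.iloc[:t], y.iloc[:t])
    preds[dates[t]] = model.predict(X.iloc[[t]])[0]

out_of_sample = pd.Series(preds)  # strictly out-of-sample predictions
```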
Dimension reduction methods not only help to condense information about predictors of trading strategies but also support portfolio construction. In particular, they are suited for detecting latent factors of a broad set of asset prices (view post here). These factors can be used to improve estimates of the covariance structure of these prices and – by extension – to improve the construction of a well-diversified minimum variance portfolio (view post here).
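A compact sketch of this idea, using only NumPy on synthetic returns, extracts the top principal components, rebuilds a regularized covariance matrix from them, and derives unconstrained minimum-variance weights; the factor count and data are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
returns = rng.normal(size=(500, 10))  # synthetic daily returns of 10 assets

# Principal components of the sample covariance matrix (eigh returns
# eigenvalues in ascending order).
cov = np.cov(returns, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
k = 3                                           # keep the top-k factors
loadings = eigvec[:, -k:] * np.sqrt(eigval[-k:])

# Factor-based covariance estimate: systematic part plus diagonal residual
# variances, a common way to de-noise a sample covariance matrix.
cov_factor = loadings @ loadings.T
cov_est = cov_factor + np.diag(np.diag(cov - cov_factor))

# Unconstrained minimum-variance portfolio: weights proportional to the
# inverse covariance matrix applied to a vector of ones.
ones = np.ones(cov_est.shape[0])
weights = np.linalg.solve(cov_est, ones)
weights /= weights.sum()
```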
Compared with other research fields, data on the relation between macroeconomic developments and modern financial market returns are scant. This reflects the limited history of modern derivatives markets and the rarity of critical macroeconomic events, such as business cycles, policy changes, or financial crises. Superficially, it seems that many data series and data points are available. However, occurrences of major shocks and trends are limited.
The scarcity of major economic events has two major consequences for the application of statistical learning to macro trading strategies: model choice must rely heavily on theoretical priors, and the data offer only limited scope for cross-validation and hyperparameter tuning.
Statistical learning with reasonable and logical priors for model choice can support trading signal generation through sequential optimization based on panel cross-validation, serving trading signal selection, return prediction, and market regime classification (view post here). This approach can broadly be summarized in six steps, the core of which is sketched below.
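The snippet below sketches the sequential-optimization idea on synthetic data, using scikit-learn's TimeSeriesSplit as a simple stand-in for proper panel cross-validation: each validation fold lies strictly after its training data, so hyperparameters are chosen as they could have been chosen in real time.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))                  # synthetic stacked features
y = X @ np.array([0.4, -0.2, 0.0, 0.1]) + rng.normal(scale=1.0, size=300)

# Time-ordered splits prevent leakage from the future into model selection;
# GridSearchCV then picks the regularization strength on past data only.
cv = TimeSeriesSplit(n_splits=5)
search = GridSearchCV(Ridge(), {"alpha": [0.1, 1.0, 10.0]}, cv=cv)
search.fit(X, y)
print(search.best_params_)
```

In a full pipeline, this selection step would be repeated at each rebalancing date over an expanding sample, and performed across a panel of countries rather than a single time axis.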
Machine learning encompasses statistical learning methods but partly automates the construction of forecast models through the study of data patterns, the selection of the best functional form for a given level of complexity, and the selection of the best level of complexity for out-of-sample forecasting. Machine learning can add efficiency to classical asset pricing models, such as factor models and macro trading rules, mainly because it is flexible, adaptable, and generalizes knowledge well (view post here). Machine learning is conventionally divided into three main fields: supervised learning, unsupervised learning, and reinforcement learning.
Artificial neural networks have become increasingly practical for (supervised and unsupervised) macro trading research. Neural networks are adaptive machine learning methods that use interconnected layers of neurons. Any given layer of n neurons represents n learned features. These are passed through a linear map, followed by a one-to-one nonlinear activation function, to form k neurons of the next layer, representing a collection of k transformed features. Learning corresponds to finding the weights and biases of these linear maps from training data.
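The mechanics of a single forward pass are simple enough to write out in NumPy, as below; the layer sizes and random weights are arbitrary, and the training loop that would actually find good weights and biases is omitted.

```python
import numpy as np

def relu(x):
    # A common one-to-one nonlinear activation function.
    return np.maximum(0.0, x)

rng = np.random.default_rng(3)
x = rng.normal(size=5)                  # 5 input features (one observation)

# Hidden layer: a linear map (weights W1, biases b1) followed by the
# activation, turning 5 input features into 8 transformed features.
W1, b1 = rng.normal(size=(8, 5)), np.zeros(8)
hidden = relu(W1 @ x + b1)

# Output layer: another linear map produces the final prediction.
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)
prediction = W2 @ hidden + b2
```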
Recurrent neural networks are a class of neural networks designed to model sequence data such as time series. Specialized recurrent neural networks have been developed to retain longer memory, particularly LSTM (Long Short-Term Memory) and GRU (Gated Recurrent Unit). The advantage of neural networks is their flexibility in including complex interactions of features, non-linear effects, and various types of non-price information.
Neural networks for financial market trading can be implemented in Python with TensorFlow or PyTorch. For example, neural networks can principally be used to estimate the state of the market on a daily or higher frequency based on an appropriate feature space, i.e., data series that characterize the market (view post here). Also, they have gained prominence for predicting the realized volatility of asset prices (view post here). Beyond that, neural networks can be used to detect lagged correlations between different asset prices (view post here) or market price distortions (view post here).
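As a minimal PyTorch illustration of the recurrent case, the model below maps a sequence of daily features to a one-step-ahead prediction, such as realized volatility; the dimensions and data are invented, and training is omitted.

```python
import torch
import torch.nn as nn

class SeqModel(nn.Module):
    """A minimal LSTM regressor for sequence data."""

    def __init__(self, n_features: int, hidden: int = 16):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # predict from the last hidden state

model = SeqModel(n_features=4)
x = torch.randn(32, 20, 4)               # synthetic batch of 20-day sequences
pred = model(x)                           # shape: (32, 1)
```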
Backtesting refers to calculations of theoretical profits and losses that would have arisen from applying an algorithmic trading strategy in the past. Its purpose is to gauge the likely quality of a trading strategy in the future. Statistical programming has made backtesting easy. However, this computational power and convenience can also be corrosive to the investment process: models tend to hug temporary patterns, while data samples for cross-validation are limited. Moreover, the business of algorithmic trading strategies, unfortunately, provides strong incentives for overfitting models and embellishing backtests (view post here). Similarly, academic researchers in the field of trading factors often feel compelled to resort to data mining in order to produce publishable ‘significant’ empirical findings (view post here).
Good backtests require sound principles and integrity (view post here). Sound principles should include [1] formulating a logical economic theory upfront, [2] choosing sample data upfront, [3] keeping the model simple and intuitive, and [4] limiting tryouts when testing ideas. Realistic performance expectations of trading strategies should be based on a range of plausible versions of a strategy, not an optimized one. Bayesian inference works well for that approach, as it estimates both the performance parameters and their uncertainty. The most important principle of all is integrity: aiming to produce good research rather than good backtests and to communicate statistical findings honestly rather than selling them.
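A toy version of that Bayesian idea, under a deliberately simplified known-variance normal model with a flat prior on the mean return, is sketched below on synthetic strategy returns; it yields a distribution of Sharpe ratios rather than a single optimistic point estimate.

```python
import numpy as np

rng = np.random.default_rng(4)
daily_pnl = rng.normal(loc=0.02, scale=1.0, size=250)  # synthetic daily returns

# With a flat prior and known variance, the posterior of the mean return is
# normal around the sample mean with standard error s / sqrt(n).
n, s = len(daily_pnl), daily_pnl.std(ddof=1)
post_mean, post_sd = daily_pnl.mean(), s / np.sqrt(n)

# Posterior draws of the annualized Sharpe ratio express performance
# uncertainty explicitly.
draws = rng.normal(post_mean, post_sd, size=10_000)
sharpe_draws = draws / s * np.sqrt(252)
print(np.percentile(sharpe_draws, [5, 50, 95]))
```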
One of the greatest ills of classical market prediction models is exaggerated performance metrics that arise from choosing the model structure with hindsight. Even if backtests estimate model parameters sequentially and apply them strictly out of sample, the choice of hyperparameters is often made with full knowledge of the history of markets and economies. For example, the type of estimation, the functional form, and – most importantly – the set of considered features are often chosen with hindsight. This hindsight bias can be reduced by sequential hyperparameter tuning or ensemble methods.
Integrating transaction costs into the development process of algorithmic trading strategies can be highly beneficial. One can use a “portfolio machine learning method” to that end (view post here).
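The referenced method goes beyond a short snippet, but the basic principle of evaluating a signal net of costs can be illustrated as below, with synthetic returns, a synthetic signal, and an assumed proportional cost rate.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
returns = pd.Series(rng.normal(scale=0.01, size=500))  # synthetic asset returns
signal = pd.Series(rng.normal(size=500)).clip(-1, 1)   # positions in [-1, 1]

cost_rate = 0.0005                        # assumed cost per unit of turnover
turnover = signal.diff().abs().fillna(signal.abs())    # units traded each day

gross_pnl = signal.shift(1) * returns     # trade on the prior day's signal
net_pnl = gross_pnl - cost_rate * turnover.shift(1)

ann = np.sqrt(252)
print(f"gross Sharpe: {gross_pnl.mean() / gross_pnl.std() * ann:.2f}")
print(f"net Sharpe:   {net_pnl.mean() / net_pnl.std() * ann:.2f}")
```

Optimizing a strategy on net rather than gross returns typically penalizes high-turnover signals and favors slower-moving ones.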