Quantamental economic surprise indicators: a primer

Jupyter Notebook

Quantamental economic surprises are point-in-time measures of deviations of economic indicators from expected values. There are two types of surprises: first-print events and pure revisions. First-print events feature new observation periods, and the surprise element depends on market expectations of the indicator. Market surveys can approximate such expectations, but only for a limited number of indicators. Quantamental surprises use econometric prediction models and can be calculated for all indicators and transformations, principally using the whole information state.
This post introduces economic surprises in global industry and construction and shows how they can be transformed into short-term macro trading signals for commodities. There is clear empirical evidence of the predictive power of such surprises for a basket of industrial commodity futures at daily and weekly frequencies. Related simulated PnL generation produces risk-adjusted alpha, albeit mainly in periods of large swings in manufacturing and construction.
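As a rough, hypothetical illustration of the concept (not the post's actual methodology), the sketch below computes a point-in-time surprise as the gap between a new data print and the forecast of a simple autoregression estimated only on observations available before that print; the series, lag order, and frequency are made up.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.ar_model import AutoReg  # simple stand-in for an econometric prediction model

# Hypothetical monthly industrial output growth series (% over a year ago)
rng = np.random.default_rng(0)
dates = pd.date_range("2015-01-01", periods=120, freq="MS")
ip_growth = pd.Series(rng.normal(2.0, 1.5, len(dates)), index=dates)

surprises = {}
for t in range(24, len(ip_growth)):
    history = ip_growth.iloc[:t]             # only observations available before the new print
    fit = AutoReg(history, lags=3).fit()     # re-estimated each period on past data only
    predicted = float(np.asarray(fit.forecast(1))[0])
    surprises[dates[t]] = ip_growth.iloc[t] - predicted   # actual print minus model expectation

print(pd.Series(surprises).tail())
```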


Global FX management with systematic macro scores

Jupyter Notebook

Global foreign exchange markets are subject to a wide range of macroeconomic influences. The sheer breadth of related information and required analyses often prevent their systematic use in trading. However, modern macro-quantamental scorecards can condense ample point-in-time macroeconomic data into thematic scores for easy systematic visualization and empirical evaluation.
This post demonstrates how to create structured macro-quantamental scorecards for FX forward trading in Python. It uses indicators related to economic growth differentials, monetary policy divergences, external balances, valuation metrics, and price competitiveness. Resulting scorecards provide point-in-time snapshots of macroeconomic conditions across all liquid currencies. They also summarize historical and thematic perspectives. Empirical analysis highlights the predictive power and trading value of macro-quantamental scores.
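For a flavor of how indicators can be condensed into thematic scores, here is a minimal pandas sketch with made-up themes, currencies, and data; it is not the scorecard tooling used in the post.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
currencies = ["AUD", "CAD", "EUR", "JPY", "MXN"]
dates = pd.bdate_range("2023-01-02", periods=250)

# Hypothetical point-in-time indicator panels: one DataFrame (dates x currencies) per theme
themes = ["growth", "monetary_policy", "external_balances", "valuation"]
panels = {th: pd.DataFrame(rng.normal(size=(len(dates), len(currencies))),
                           index=dates, columns=currencies)
          for th in themes}

def zscore(panel: pd.DataFrame) -> pd.DataFrame:
    """Normalize a theme panel using only information available up to the prior day."""
    mean = panel.expanding(min_periods=60).mean().shift(1)
    std = panel.expanding(min_periods=60).std().shift(1)
    return ((panel - mean) / std).clip(-3, 3)   # winsorize at 3 standard deviations

scores = {th: zscore(p) for th, p in panels.items()}
composite = sum(scores.values()) / len(scores)          # equally weighted thematic average

# Latest scorecard snapshot: themes as rows, currencies as columns
snapshot = pd.DataFrame({th: s.iloc[-1] for th, s in scores.items()}).T
snapshot.loc["composite"] = composite.iloc[-1]
print(snapshot.round(2))
```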


Classifying credit markets with macro factors

Jupyter Notebook

Macro credit trades can be implemented through CDS indices. Due to obligors’ default option, long credit positions typically feature a positive mean and negative skew of returns. At the macro level, downside skew is reinforced by fragile liquidity and the potential for escalating credit crises. To enhance performance and help contain drawdowns, credit markets can be classified based on point-in-time macro factors, such as bank lending surveys, private credit dynamics, real estate price growth, business confidence dynamics, real interest rates, and credit spread dynamics. These factors support statistical learning processes that sequentially select and apply versions of four popular classification methods: naive Bayes, logistic regression, nearest neighbours, and random forest.
With only two decades and four liquid markets of CDS index trading, empirical results are still tentative. Yet they suggest that machine learning classification can detect the medium-term bias of returns and produce good monthly accuracy and balanced accuracy ratios. The random forest method stands out regarding predictive power and economic value generation.
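A compact way to picture the sequential choice among the four classifier families is a scikit-learn grid search within time-respecting cross-validation. The sketch below uses simulated features and labels and standard scikit-learn only; it is not the post's learning pipeline.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.normal(size=(240, 6))                                        # hypothetical monthly macro factor scores
y = (X[:, :3].sum(axis=1) + rng.normal(size=240) > 0).astype(int)    # 1 = positive CDS index return month

# The "clf" step is a placeholder that the grid swaps out for each classifier family
pipe = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression())])
grid = [
    {"clf": [GaussianNB()]},
    {"clf": [LogisticRegression(max_iter=1000)], "clf__C": [0.1, 1.0, 10.0]},
    {"clf": [KNeighborsClassifier()], "clf__n_neighbors": [5, 10, 20]},
    {"clf": [RandomForestClassifier(random_state=0)], "clf__max_depth": [2, 4]},
]
search = GridSearchCV(pipe, grid, cv=TimeSeriesSplit(n_splits=5), scoring="balanced_accuracy")
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```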


Macro information changes as systematic trading signals

Jupyter Notebook

Macro information state changes are point-in-time updates of recorded economic developments. They can refer to a specific indicator or a broad development, such as growth or inflation. The broader the economic concept, the higher the frequency of changes. Information state changes are valuable trading indicators. They provide daily or weekly signals and naturally thrive in periods of underestimated escalatory economic change, adding a layer of tail risk protection.
This post illustrates the application of information state changes to interest rate swap trading across developed and emerging markets, focusing on broad macro developments such as economic growth, sentiment, labour markets, inflation, and financing conditions. For trading, we introduce the concept of normalized information state changes, which are comparable across economic categories and countries and, hence, can be aggregated into local and global signals. The predictive power of aggregate information state changes has been strong, with material and consistent PnL generation over the past 25 years.
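The sketch below shows one simplified way to normalize information state changes and aggregate them into local and global signals; the categories, countries, and scaling rule are hypothetical, not the post's exact method.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
dates = pd.bdate_range("2024-01-01", periods=260)
countries = ["USD", "EUR", "GBP", "JPY"]
categories = ["growth", "labour", "inflation", "sentiment", "financing"]

# Hypothetical daily information state changes per country and category,
# with categories deliberately differing in volatility
changes = {c: pd.DataFrame(rng.normal(size=(len(dates), len(categories)))
                           * (rng.random(len(categories)) * 2),
                           index=dates, columns=categories)
           for c in countries}

def normalize(df: pd.DataFrame) -> pd.DataFrame:
    """Scale each category's changes by an expanding estimate of their typical absolute size."""
    scale = df.abs().expanding(min_periods=60).mean().shift(1)
    return df / scale

local_signals = {c: normalize(df).mean(axis=1) for c, df in changes.items()}  # per-country aggregate
global_signal = pd.concat(local_signals, axis=1).mean(axis=1)                 # cross-country average
print(global_signal.tail())
```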


Macro-quantamental scorecards: A Python kit for fixed-income markets

Jupyter Notebook

Macro-quantamental scorecards are condensed visualizations of point-in-time economic information for a specific financial market. Their defining characteristic is the combination of efficient presentation and evidence of empirical power. This post and the accompanying Python code show how to build scorecards for duration exposure based on six thematic scores: excess inflation, excess economic growth, overconfidence, labour market tightening, financial conditions, and government finance.
All thematic scores have displayed predictive power for interest rate swap returns in the U.S. and the euro area over the past 25 years. Since economic change is often gradual and requires attention to a broad range of indicators, monitoring can be tedious and costly. The influence of such change can, therefore, build surreptitiously. Macro-quantamental scorecards cut information costs and attention time and, hence, improve the information efficiency of the investment process.
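To make the scorecard format concrete, here is a toy snapshot with invented score values and an assumed equal-weighted, sign-adjusted composite; it is only a schematic, not the scorecard produced by the post's code.

```python
import pandas as pd

# Hypothetical latest thematic scores (positive = stronger pressure from that theme)
scores = pd.DataFrame(
    {"USD": [0.8, 0.3, 1.1, 0.5, -0.2, 0.9],
     "EUR": [0.2, -0.4, 0.6, 0.1, -0.8, 0.4]},
    index=["excess_inflation", "excess_growth", "overconfidence",
           "labour_tightening", "financial_conditions", "government_finance"],
)

# Illustrative sign convention: each theme is assumed here to weigh against long duration positions
signs = pd.Series(-1, index=scores.index)
duration_signal = scores.mul(signs, axis=0).mean()   # equally weighted composite per market

scorecard = scores.copy()
scorecard.loc["composite duration signal"] = duration_signal
print(scorecard.round(2))
```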


Evaluating macro trading signals in three simple steps

Jupyter Notebook

Meaningful evaluation of macro trading signals must consider their seasonality and diversity across countries. This post proposes a three-step process to this end. The first step runs significance tests of proposed predictive relations using a panel of markets. The second step reviews the reliability of predictive relations based on accuracy and different correlation metrics across time and markets. The third step estimates the economic value of the signal based on performance metrics of a standardized naïve PnL. All these steps can be implemented with special Python classes of the Macrosynergy package. Conscientious evaluation of macro signals not only benefits their selection for live trading. It also paints a realistic picture of the PnL profile, which is critical for setting risk limits and for broader portfolio integration.
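As a bare-bones stand-in for the three steps (the post itself uses dedicated Macrosynergy package classes, which this sketch does not replicate), the following toy panel runs a pooled significance test, simple reliability metrics, and a naive PnL:

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(4)
dates = pd.period_range("2005-01", periods=240, freq="M")
markets = ["USD", "EUR", "GBP", "JPY"]

# Hypothetical panel: end-of-month signal and the next month's target return, already aligned
signal = pd.DataFrame(rng.normal(size=(240, 4)), index=dates, columns=markets)
returns = 0.1 * signal + pd.DataFrame(rng.normal(size=(240, 4)), index=dates, columns=markets)
sig, ret = signal.stack(), returns.stack()

# Step 1: significance of the pooled signal-return relation (a proper panel test would also
# adjust for cross-sectional correlation, which this sketch omits)
slope, _, _, pval, _ = stats.linregress(sig, ret)

# Step 2: reliability -- accuracy of directional calls and rank correlation
accuracy = float(np.mean(np.sign(sig) == np.sign(ret)))
kendall, _ = stats.kendalltau(sig, ret)

# Step 3: economic value -- naive PnL of positions proportional to the signal
pnl = (signal * returns).sum(axis=1)
sharpe = pnl.mean() / pnl.std() * np.sqrt(12)

print(f"slope={slope:.3f} (p={pval:.3g}), accuracy={accuracy:.2f}, "
      f"kendall={kendall:.2f}, naive Sharpe={sharpe:.2f}")
```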


Equity market timing: the value of consumption data

Jupyter Notebook

The dividend discount model suggests that stock prices are negatively related to expected real interest rates and positively to earnings growth. The economic position of households or consumers influences both. Consumer strength spurs demand and exerts price pressure, thus pushing up real policy rate expectations. Meanwhile, tight labor markets and high wage growth shift national income from capital to labor, weighing on earnings growth.
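For reference, the constant-growth version of the model makes both sensitivities explicit:

P_t = \frac{D_{t+1}}{r - g}

where P_t is the equity price, D_{t+1} expected dividends, r the required return (which rises with expected real interest rates), and g expected long-term earnings and dividend growth.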
This post calculates a point-in-time score of consumer strength for 16 countries over almost three decades based on excess private consumption growth, import trends, wage growth, unemployment rates, and employment gains. This consumer strength score and most of its constituents displayed highly significant negative predictive power with regard to equity index returns. Value generation in a simple equity timing model has been material, albeit concentrated in the early and late stages of the business cycle.


Optimizing macro trading signals – A practical introduction

Jupyter Notebook

Based on theory and empirical evidence, point-in-time indicators of macroeconomic trends and states are strong candidates for trading signals. A key challenge is to select and condense them into a single signal. The simplest (and often successful) approach is conceptual risk parity, i.e., an equally weighted average of normalized scores. However, there is scope for optimization. Statistical learning offers methods for sequentially choosing the best model class and other hyperparameters for signal generation, thus supporting realistic backtests and automated operation of strategies.
This post and an attached Jupyter Notebook show implementations of sequential signal optimization with the scikit-learn package and some specialized extensions. In particular, the post applies statistical learning to sequential optimization of three important tasks: feature selection, return prediction, and market regime classification.
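A minimal sketch of such a sequential workflow with plain scikit-learn (simulated data, none of the specialized extensions, and no transaction realism) could look as follows:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 8))                 # hypothetical candidate macro scores
y = X[:, 0] * 0.2 + rng.normal(size=300)      # hypothetical target returns

pipe = Pipeline([("select", SelectKBest(score_func=f_regression)),
                 ("predict", Ridge())])
grid = {"select__k": [2, 4, 8], "predict__alpha": [0.1, 1.0, 10.0]}

signals = []
# Re-optimize hyperparameters at each step using only data available up to that point,
# then produce the out-of-sample signal for the next period.
for t in range(120, len(y) - 1):
    search = GridSearchCV(pipe, grid, cv=TimeSeriesSplit(n_splits=3),
                          scoring="neg_mean_squared_error")
    search.fit(X[:t], y[:t])
    signals.append(search.predict(X[t:t + 1])[0])

print(f"generated {len(signals)} sequential out-of-sample signals")
```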


Sovereign debt sustainability and CDS returns

Selling protection through credit default swaps is akin to writing put options on sovereign default. Together with tenuous market liquidity, this explains the negative skew and fat tails of generic CDS (short protection or long credit) returns. Since default risk depends critically on sovereign debt dynamics, point-in-time metrics of general government debt sustainability for given market conditions are plausible trading indicators for sovereign CDS markets and do justice to the non-linearity of returns. There is strong evidence of a negative relation between increases in predicted debt ratios and concurrent returns. There is also evidence of a negative predictive relation between debt ratio changes and subsequent CDS returns. Trading on these relations seems to produce modest but consistent alpha.
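For reference, the standard debt dynamics identity that underlies such sustainability metrics is (generic notation, not necessarily the post's):

d_t = \frac{1 + i_t}{1 + g_t}\, d_{t-1} - pb_t

where d_t is the general government debt-to-GDP ratio, i_t the average nominal interest rate on outstanding debt, g_t nominal GDP growth, and pb_t the primary balance as a share of GDP. Predicted debt ratios rise when interest rates exceed growth or primary deficits widen.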


How to measure the quality of a trading signal

The quality of a trading signal depends on its ability to predict future target returns and to generate material economic value when applied to positioning. Statistical metrics of these two properties are related but not identical. Empirical evidence must support both. Moreover, there are alternative criteria for predictive power and economic trading value, which are summarized in this post. The right choice depends on the characteristics of the trading signal and the objective of the strategy. Each strategy calls for its own bespoke criterion function. This is particularly important for statistical learning that seeks to optimize hyperparameters of trading models and derive meaningful backtests.
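The sketch below contrasts two such criterion functions, a predictive rank correlation and the Sharpe ratio of a naive signal-based PnL, wrapped as scikit-learn scorers for hyperparameter selection; the data, model, and annualization are hypothetical.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import Ridge
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

def predictive_correlation(y_true, y_pred):
    """Criterion 1: rank correlation between predicted signal and realized return."""
    return stats.kendalltau(y_pred, y_true)[0]

def naive_sharpe(y_true, y_pred):
    """Criterion 2: annualized Sharpe ratio of a PnL that takes positions in the
    direction of the predicted signal (assumes monthly periods, no costs)."""
    pnl = np.sign(y_pred) * y_true
    return np.nan if pnl.std() == 0 else pnl.mean() / pnl.std() * np.sqrt(12)

rng = np.random.default_rng(6)
X = rng.normal(size=(240, 5))                 # hypothetical macro features
y = X[:, 0] * 0.15 + rng.normal(size=240)     # hypothetical target returns

for name, crit in [("correlation", predictive_correlation), ("sharpe", naive_sharpe)]:
    search = GridSearchCV(Ridge(), {"alpha": [0.1, 1.0, 10.0]},
                          cv=TimeSeriesSplit(n_splits=5), scoring=make_scorer(crit))
    search.fit(X, y)
    print(name, "->", search.best_params_)
```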
