FX trading signals: Common sense and machine learning

Jupyter Notebook

Two valid methods to combine macro trading factors into a single signal are “conceptual parity” and machine learning. Conceptual parity takes a set of conceptually separate normalized factors and gives them equal weights. Machine learning optimizes models and derives weights sequentially, potentially with theoretical restrictions. Both methods support realistic backtests. Conceptual parity works best in the presence of strong theoretical priors. Machine learning works best with large homogeneous data sets.
We apply conceptual parity and two machine learning methods to combine 11 macro-quantamental trading factors for developed and emerging market FX forwards in 16 currencies since 2000. The signals derived by all methods have been highly significant predictors and produced material and uncorrelated risk-adjusted trading returns. Machine learning methods have failed to outperform conceptual parity, probably reflecting that theoretical priors in the FX space are abundant while data are limited and heterogeneous.
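
As a minimal sketch of the conceptual parity idea, assuming a small hypothetical panel of three factors for three currencies (names and data invented for illustration), the factors are normalized and averaged with equal weights:

```python
import numpy as np
import pandas as pd

# Hypothetical panel of factor scores: rows are (currency, month), columns are
# conceptually separate factors. Real applications would use quantamental data.
rng = np.random.default_rng(0)
idx = pd.MultiIndex.from_product(
    [["AUD", "BRL", "ZAR"], pd.date_range("2000-01-01", periods=12, freq="MS")],
    names=["cid", "real_date"],
)
factors = pd.DataFrame(rng.normal(size=(len(idx), 3)), index=idx,
                       columns=["carry", "growth", "ext_balance"])

# Normalize each factor to a z-score around its neutral (zero) level; a realistic
# backtest would compute means and standard deviations on expanding windows only.
zscores = (factors - factors.mean()) / factors.std()

# Conceptual parity: equal weights on the normalized factors.
parity_signal = zscores.mean(axis=1).rename("parity_signal")
print(parity_signal.head())
```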


How random forests can improve macro trading signals

Jupyter Notebook

Random forest regression combines the discovery of complex predictive relations with efficient management of the “bias-variance trade-off” of machine learning. The method is suitable for constructing macro trading signals with statistical learning, particularly when relations between macro factors and market returns are multi-faceted or non-monotonic and do not have clear theoretical priors to go on. This post shows how random forest regression can be used in a statistical learning pipeline for macro trading signals that chooses and optimizes models sequentially over time. For cross-sector equity allocation using a set of over 50 conceptual macro factors, regression trees have delivered signals with significant predictive power and economic value. Value generation has been higher and less seasonal than for statistical learning with linear regression models.
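
A simplified sketch of how a random forest could slot into such a pipeline, using scikit-learn on synthetic data rather than the post's actual factor set; the hyperparameter grid, scoring choice, and feature construction are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

# Hypothetical feature matrix of macro factors (T observations, K factors) and
# subsequent-period returns; in practice these would be point-in-time panels.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))
y = X[:, 0] * np.sign(X[:, 1]) + rng.normal(scale=0.5, size=500)  # non-monotonic link

# A small grid keeps the bias-variance trade-off manageable: shallow trees and
# minimum leaf sizes limit variance, while many trees average out noise.
grid = {"max_depth": [2, 3, 5], "min_samples_leaf": [20, 50]}
cv = TimeSeriesSplit(n_splits=5)  # respects the temporal ordering of the sample
search = GridSearchCV(RandomForestRegressor(n_estimators=300, random_state=0),
                      grid, cv=cv, scoring="neg_mean_squared_error")
search.fit(X, y)

# The fitted model's prediction for the latest feature vector would serve as
# the trading signal for the next period; the fit would be repeated over time.
signal = search.best_estimator_.predict(X[-1:, :])
print(search.best_params_, signal)
```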


How to build a macro trading strategy (with open-source Python)

This post is a condensed guide on best practices for developing systematic macro trading strategies with links to related resources. The focus is on delivering proofs of strategy concepts that use direct information on the macroeconomy. The critical steps of the process are (1) downloading appropriate time series data panels of macro information and target returns, (2) transforming macro information states into panels of factors, (3) combining factors into a single type of signal per traded contract, and (4) evaluating the quality of the signals in various ways.
Best practices include the formulation of theoretical priors, easily auditable code for preprocessing, visual study of data before and after transformations, managing signal optimisation with statistical learning, and a protocol for dealing with rejected hypotheses. A quick, standardised and transparent process supports integrity and reduces moral hazard and data mining. Standard Python data science packages and the open-source Macrosynergy package provide all necessary functionality for efficient proofs of concept.
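
The four steps can be sketched as a skeleton workflow; the function names and synthetic data below are stand-ins, not the post's code or the Macrosynergy package API:

```python
import numpy as np
import pandas as pd

def download_panels():
    """Step 1: obtain point-in-time macro indicators and target returns (synthetic here)."""
    rng = np.random.default_rng(2)
    dates = pd.date_range("2000-01-01", periods=250, freq="B")
    macro = pd.DataFrame(rng.normal(size=(250, 2)), index=dates, columns=["infl", "growth"])
    returns = pd.Series(rng.normal(scale=0.01, size=250), index=dates, name="fx_return")
    return macro, returns

def make_factors(macro):
    """Step 2: transform information states into normalized factors (expanding z-scores)."""
    return (macro - macro.expanding().mean()) / macro.expanding().std()

def combine(factors):
    """Step 3: combine factors into a single signal per traded contract."""
    return factors.mean(axis=1).rename("signal")

def evaluate(signal, returns):
    """Step 4: a first-pass check of signal quality (lagged signal vs return correlation)."""
    return signal.shift(1).corr(returns)

macro, returns = download_panels()
print("signal/return correlation:", round(evaluate(combine(make_factors(macro)), returns), 3))
```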


Using principal components to construct macro trading signals

Jupyter Notebook

Principal Components Analysis (PCA) is a dimensionality reduction technique that condenses the key information from a large dataset into a smaller set of uncorrelated variables called “principal components.” This smaller set often functions better as features for predictive regressions, stabilizing coefficient estimates and reducing the influence of noise. In this way, principal components can improve statistical learning methods that optimize trading signals.

This post shows how principal components can serve as building blocks of trading signals for developed market interest rate swap positions, condensing the information of macro-quantamental indicators on inflation pressure, activity growth, and credit and money expansion. Compared to a simple combination of these categories, PCA-based statistical learning methods have produced materially higher predictive accuracy and backtested trading profits. PCA methods have also outperformed non-PCA-based regression learning. PCA-based statistical learning in backtesting leaves little scope for data mining or hindsight, and the discovery of trading value has high credibility.
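
A minimal sketch of the underlying mechanic, assuming synthetic correlated indicators and standard scikit-learn components; the number of components and the plain OLS at the end are illustrative choices, not the post's specification:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical set of correlated macro-quantamental indicators and subsequent
# swap returns; stand-ins for the post's actual data panels.
rng = np.random.default_rng(3)
common = rng.normal(size=(400, 1))
X = common + 0.5 * rng.normal(size=(400, 8))   # eight correlated indicators
y = common[:, 0] + rng.normal(scale=0.5, size=400)

# Condense the indicators into a few uncorrelated principal components before
# the predictive regression; this stabilizes coefficient estimates.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=3)),
    ("ols", LinearRegression()),
])
pipe.fit(X[:-1], y[:-1])

# The prediction for the most recent feature vector is the candidate signal.
print("explained variance:", pipe.named_steps["pca"].explained_variance_ratio_.round(2))
print("latest signal:", pipe.predict(X[-1:]))
```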


How to adjust regression-based trading signals for reliability

Jupyter Notebook

Regression-based statistical learning is convenient for combining candidate trading factors into single signals (view post here). Models and signals are updated sequentially using expanding time windows of empirical evidence, offering a realistic basis for backtesting. However, simple regression-based predictions disregard statistical reliability, which tends to increase as time passes or decrease after structural breaks. This short methodological post proposes signals based on regression coefficients adjusted for statistical precision. The adjustment correctly aligns intertemporal risk-taking with the predictive power of signals. PnLs become less seasonal and outperform as sample size and statistical quality grow.
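
One way to express the idea in code is to scale the regression prediction by a measure of statistical confidence. The sketch below uses one minus the slope coefficient's p-value on synthetic single-factor data; this is an illustrative adjustment, not necessarily the post's exact formula:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical single-factor illustration of precision-adjusted signals: weak
# statistical evidence translates into smaller positions.
rng = np.random.default_rng(4)
x = rng.normal(size=300)                       # factor values
r = 0.2 * x + rng.normal(scale=1.0, size=300)  # subsequent returns

signals = []
for t in range(60, 300):                       # expanding estimation window
    model = sm.OLS(r[:t], sm.add_constant(x[:t])).fit()
    beta, pval = model.params[1], model.pvalues[1]
    raw = beta * x[t]                          # plain regression-based signal
    adjusted = raw * (1.0 - pval)              # reliability-adjusted signal
    signals.append((raw, adjusted))

print("last raw vs adjusted signal:", np.round(signals[-1], 4))
```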


How “beta learning” improves macro trading strategies

Jupyter Notebook

Macro beta is the sensitivity of a financial contract’s return to a broad economic or market factor. Macro betas broaden the traditional concept of equity market betas and can often be estimated using financial contract baskets. Macro sensitivities are endemic in trading strategies, diluting alpha, undermining portfolio diversification, and distorting backtests. However, it is possible to immunize strategies through “beta learning,” a statistical learning method that supports identifying appropriate models and hyperparameters and allows backtesting of hedged strategies without look-ahead bias. The process can be easily implemented with existing Python classes and methods. This post illustrates the powerful beneficial impact of macro beta estimation, applied to an emerging market FX carry strategy.
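
A schematic illustration of the hedging step, assuming a synthetic benchmark basket and a simple expanding-window beta estimate in place of the statistical learning procedure described in the post:

```python
import numpy as np
import pandas as pd

# Synthetic daily returns: a benchmark basket proxying a broad macro factor and
# a target position (e.g. an FX carry leg) with sensitivity to that factor.
rng = np.random.default_rng(5)
dates = pd.date_range("2010-01-01", periods=500, freq="B")
benchmark = pd.Series(rng.normal(scale=0.01, size=500), index=dates)
target = 0.6 * benchmark + pd.Series(rng.normal(scale=0.008, size=500), index=dates)

# Point-in-time beta: estimated only from information available up to t-1,
# avoiding look-ahead bias in the hedged backtest.
cov = target.expanding(min_periods=60).cov(benchmark).shift(1)
var = benchmark.expanding(min_periods=60).var().shift(1)
beta = cov / var

# Hedged return removes the estimated macro sensitivity from the position.
hedged = target - beta * benchmark
print("correlation with benchmark before/after hedging:",
      round(target.corr(benchmark), 2), round(hedged.corr(benchmark), 2))
```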


Evaluating macro trading signals in three simple steps

Jupyter Notebook

Meaningful evaluation of macro trading signals must consider their seasonality and diversity across countries. This post proposes a three-step process to this end. The first step runs significance tests of proposed predictive relations using a panel of markets. The second step reviews the reliability of predictive relations based on accuracy and different correlation metrics across time and markets. The third step estimates the economic value of the signal based on performance metrics of a standardized naïve PnL. All these steps can be implemented with special Python classes of the Macrosynergy package. Conscientious evaluation of macro signals not only benefits their selection for live trading. It also paints a realistic picture of the PnL profile, which is critical for setting risk limits and for broader portfolio integration.
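
The three steps can be illustrated with generic tools on synthetic data. The sketch below uses statsmodels and pandas rather than the Macrosynergy package classes mentioned in the post, and the pooled OLS t-test is a simplification of a proper panel test:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic lagged signal and subsequent returns standing in for a panel.
rng = np.random.default_rng(6)
n = 1000
signal = pd.Series(rng.normal(size=n))
ret = 0.1 * signal + pd.Series(rng.normal(size=n))

# Step 1: significance of the predictive relation (pooled OLS t-test here;
# the post's panel tests also account for cross-country correlation).
fit = sm.OLS(ret, sm.add_constant(signal)).fit()
print("slope t-stat:", round(fit.tvalues.iloc[1], 2))

# Step 2: reliability via accuracy and correlation metrics.
accuracy = (np.sign(signal) == np.sign(ret)).mean()
print("directional accuracy:", round(accuracy, 3),
      " pearson:", round(signal.corr(ret), 3))

# Step 3: economic value via a naive PnL with positions proportional to the signal.
pnl = signal * ret
sharpe = pnl.mean() / pnl.std() * np.sqrt(252)
print("naive annualized Sharpe:", round(sharpe, 2))
```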


FX trading signals with regression-based learning

Jupyter Notebook

Regression-based statistical learning helps build trading signals from multiple candidate constituents. The method optimizes models and hyperparameters sequentially and produces point-in-time signals for backtesting and live trading. This post applies regression-based learning to macro trading factors for developed market FX trading, using a novel cross-validation method for expanding panel data. Sequentially optimized models consider nine theoretically valid macro trend indicators to predict FX forward returns. The learning process has delivered significant predictors of returns and consistent positive PnL generation for over 20 years. The most important macro-FX signals, in the long run, have been relative labor market trends, manufacturing business sentiment changes, relative inflation expectations, and terms of trade dynamics.
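
A stripped-down analogue of such a pipeline, using scikit-learn's standard TimeSeriesSplit on a pooled synthetic sample in place of the post's purpose-built expanding-panel cross-validation:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

# Synthetic stand-ins for nine macro trend indicators and FX forward returns.
rng = np.random.default_rng(7)
X = rng.normal(size=(600, 9))
y = X @ np.linspace(0.2, 0.02, 9) + rng.normal(scale=1.0, size=600)

# Candidate models and hyperparameter grids, selected sequentially by
# out-of-sample fit on time-ordered validation folds.
candidates = [
    ("ols", LinearRegression(), {}),
    ("ridge", Ridge(), {"alpha": [0.1, 1.0, 10.0]}),
]
cv = TimeSeriesSplit(n_splits=5)
best = max(
    (GridSearchCV(est, grid, cv=cv, scoring="neg_mean_squared_error").fit(X, y)
     for _, est, grid in candidates),
    key=lambda s: s.best_score_,
)
# The prediction of the sequentially selected model serves as the FX signal.
print(type(best.best_estimator_).__name__, best.best_estimator_.predict(X[-1:]))
```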


Macroeconomic data and systematic trading strategies

While economic information undeniably wields a significant and widespread influence on financial markets, the systematic incorporation of macroeconomic data into trading strategies has thus far been limited. This reflects skepticism towards economic theory and serious data problems, such as revisions, distortions, calendar effects, and, generally, the lack of point-in-time formats. However, the emergence of industry-wide quantamental indicators and the rise of statistical learning methods in financial markets make macroeconomic information more practical and powerful. Early applications of statistical learning to macro-quantamental indicators have been successful, and a range of machine learning techniques is poised to further improve the use of economic information.


Regression-based macro trading signals

Jupyter Notebook

Regression is one method for combining macro indicators into a single trading signal. Specifically, statistical learning based on regression can optimize model parameters and hyperparameters sequentially and produce signals based on whatever model has predicted returns best up to a point in time. This method learns from growing datasets and produces valid point-in-time signals for backtesting. However, whether regression delivers good signals depends on managing the bias-variance trade-off of machine learning. This post provides guidance on pre-selecting the right regression models and hyperparameter grids based on theory and empirical evidence. It considers the advantages and disadvantages of various regression methods, including non-negative least squares, elastic net, weighted least squares, least absolute deviations, and nearest neighbors.
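
The candidate model classes can be organized as a small grid for sequential selection. The sketch below uses scikit-learn equivalents on synthetic data (non-negative least squares via LinearRegression(positive=True), least absolute deviations via median quantile regression, nearest neighbors via KNeighborsRegressor); weighted least squares is omitted because it enters through sample weights at fit time rather than a separate estimator:

```python
import numpy as np
from sklearn.linear_model import ElasticNet, LinearRegression, QuantileRegressor
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
from sklearn.neighbors import KNeighborsRegressor

# Synthetic macro factors and subsequent returns as stand-ins for real panels.
rng = np.random.default_rng(8)
X = rng.normal(size=(400, 6))
y = X @ np.array([0.3, 0.2, 0.1, 0.0, 0.0, 0.0]) + rng.normal(size=400)

# Pre-selected model classes and hyperparameter grids; in a sequential learning
# process the best performer up to each point in time would set the signal.
candidates = {
    "nnls": (LinearRegression(positive=True), {}),
    "elastic_net": (ElasticNet(), {"alpha": [0.01, 0.1, 1.0], "l1_ratio": [0.2, 0.5, 0.8]}),
    "lad": (QuantileRegressor(quantile=0.5, alpha=0.0, solver="highs"), {}),
    "knn": (KNeighborsRegressor(), {"n_neighbors": [10, 50, 100]}),
}
cv = TimeSeriesSplit(n_splits=5)
for name, (est, grid) in candidates.items():
    search = GridSearchCV(est, grid, cv=cv, scoring="neg_mean_squared_error").fit(X, y)
    print(f"{name}: best CV MSE = {-search.best_score_:.3f}")
```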
