
How “beta learning” improves macro trading strategies


Macro beta is the sensitivity of a financial contract’s return to a broad economic or market factor. Macro betas broaden the traditional concept of equity market betas and can often be estimated using baskets of financial contracts. Macro sensitivities are endemic in trading strategies, diluting alpha, undermining portfolio diversification, and distorting backtests. However, it is possible to immunize strategies through “beta learning,” a statistical learning method that supports the identification of appropriate models and hyperparameters and allows backtesting of hedged strategies without look-ahead bias. The process can be easily implemented with existing Python classes and methods. This post illustrates the benefits of macro beta estimation and hedging by applying them to an emerging market FX carry strategy.

The post below is based on Macrosynergy’s proprietary research.

A Jupyter notebook for audit and replication of the research results can be downloaded here. The notebook operation requires access to J.P. Morgan DataQuery to download data from JPMaQS, a premium service of quantamental indicators. J.P. Morgan offers free trials for institutional clients.
Also, an academic research support program sponsors data sets for relevant projects.

This post ties in with this site’s summary of “Quantitative Methods for Macro Information Efficiency.”

Market betas and macro betas

Traditional market beta is defined as the sensitivity of a security’s return to the return of a basket or an index representing the asset class to which the security belongs. Most often, it is the sensitivity of a single stock’s return to a broad equity index. Market beta can change in accordance with a security’s or the market’s evolving characteristics. Its true value is unknown, but estimates are usually derived through regression of the security’s return on the market index return.

For macro trading strategies, it is helpful to broaden the concept of beta. We can define macro beta as the sensitivity of a contract’s return to the return of a basket of contracts or an index that represents a broad economic or market factor. For example, a basket of derivatives comprising the equity, credit, and FX spaces can represent directional financial market risk, i.e., exposure to un-diversifiable financial market shocks incurred by providers of capital. Similarly, returns on baskets for inflation derivatives, commodity prices, and nominal bonds can be used to represent inflation shocks.

Unless the contract whose beta is to be estimated is itself a major driver of the macro factor, the macro beta can be estimated in the same way as the traditional market beta. In this way, the macro beta can be applied to a range of derivatives, such as FX forwards, interest rate swaps, or commodity futures, in much the same way it is traditionally applied to single stocks.

The importance of macro betas for macro strategies

The purpose of estimating macro betas is to reduce the interference of macro factors in the performance of strategies. Many macro strategies focus on the predictive power of idiosyncratic contract-specific signals. However, if the contract returns are strongly influenced by global macro factors that have no relation with the signals, those macro interferences can dominate profit and loss (PnL) outcomes. For example, in the below example of an emerging markets (EM) FX carry-based strategy, global directional market risk affects both the signals and the position returns.

Furthermore, a contract’s sensitivity to macro risk can also compromise the strategy signals if that macro risk influences the signal in a way that gears positioning towards or away from exposure to that macro risk. For example, if EM FX carry is positively and strongly related to a currency’s sensitivity to global market risk, then carry guides strategy exposure towards that risk rather than towards currency-specific opportunities.

The interference of global macro factors is detrimental in many ways. Not only does it add undesired risk and volatility to macro strategies’ PnLs, but strategies’ dependence on global macro forces undermines portfolio diversification. Macro forces are few and pervasive, and their manifestation can hit many seemingly unrelated strategies in the same direction and at the same time. Finally, macro factors can distort the predictive power of contract-specific signals in backtests or statistical tests because their influence can overshadow that of the original signals.

A well-estimated beta can help shield macro strategies from the impact of macroeconomic factors. Rather than just taking positions in the primary contracts, a “hedged strategy” also takes a corresponding position in an index or basket representing the macro factor. Moreover, signals can be adjusted in the same way if they are based on the financial contracts, as is the case with real carry (see below). However, immunization with estimated macro betas comes with risks and is not always appropriate:

  • Representing macro factors by basket returns requires a (predominantly) one-way relation between the contract and the basket. Thus, if a contract’s return clearly has the characteristic of a dependent variable, its estimated betas are meaningful. However, if the contract’s return itself strongly influences the macro factor, say in the case of a crude oil contract, or if the contract and the macro factor are both exposed to a common background factor, regression will not deliver valid estimates. This is due to the notorious problems of reverse causality and omitted variables. In this case, hedging can result in unwanted relative positions of target contracts versus the basket.
  • If betas are hard to estimate and estimates have a large variance around the true betas, we replace one problem with another, i.e., dependence on a macro factor with “basis risk.” This means that we may have less long-term systematic relation to a macro factor but, instead, incur sizeable exposure to the performance of the hedging basket that can be positive in some periods and negative in others. If the original true macro betas have not been large in the first place and the beta estimation is impaired by problems such as short time series and structural instability, the influence of forces unrelated to signals may actually increase.

A process for estimating beta with statistical learning

Statistical learning is suitable for beta estimation for two reasons. First, there are few statistical priors for the choice of model and hyperparameters, such as lookback periods. Second, using sequentially learned betas avoids data mining and look-ahead bias in constructing hedged strategies. Here, we present a statistical learning process that considers various plausible base models and hyperparameters and sequentially picks the one that scored highest to estimate betas for positions in the following months.

This process follows three principal steps:

  • First, we set the rules of a sequential learning process regarding the models and hyperparameters considered, the cross-validation splitters, and the criterion for model selection.
  • Second, we apply learning to expanding samples of panel time series of target and benchmark returns. This means we review the model choice periodically and estimate the beta for the next period, which, technically, is a test sample.
  • Third, we adjust target returns (and positions) and appropriate signals for the influence of the hedging basket based on the betas in the test samples.

We can implement such a process easily with the Macrosynergy package. The `BetaEstimator` class of the `learning` module manages the estimation of betas with respect to a benchmark market return for panels of returns. The class and its methods are customized extensions of the scikit-learn package. The learning process for the panel uses versions of the seemingly unrelated regressions (SUR) method. In particular, the `estimate_beta` method sequentially identifies optimal models through cross-validation. It derives betas for the test sample, which in the below use case is the next quarter, based on the best model at each time. The models and their hyperparameters are passed to the method through a dictionary of scikit-learn compatible linear regression models and a related dictionary of hyperparameters. The method also calculates proxy “hedged” returns based on test-sample betas.

This “beta learning” requires setting up an inner splitter for the cross-validation of the various models, and an outer splitter that separates out the pure test samples. The betas and hedged returns of the test samples are eventually stacked to give a single time series that is ready for a final evaluation of the applied beta learning process. This mimics the experience in live trading, where betas are estimated periodically across time based on what is considered the best model at the time of estimation.
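For illustration, the sketch below reproduces the logic of such a sequential process for a single cross-section, using generic scikit-learn components rather than the Macrosynergy classes themselves. The series `target` and `bench` are hypothetical daily returns of a contract and the benchmark basket, and the quarterly re-estimation grid, 12-month minimum sample, and candidate lookbacks mirror the use case described below.

```python
# Minimal sketch of sequential "beta learning" for a single return series,
# assuming pandas Series `target` and `bench` of daily returns with a
# DatetimeIndex. Names and the simplifications are illustrative only.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression, QuantileRegressor
from sklearn.model_selection import TimeSeriesSplit

models = {
    "OLS": LinearRegression(),
    "LAD": QuantileRegressor(quantile=0.5, alpha=0),  # median (least absolute deviation) regression
}
lookbacks = [63, 126, 252, 504, 756]  # working days, roughly 3 months to 3 years


def cv_score(model, lookback, X, y):
    """Mean absolute correlation of hedged returns with the benchmark across
    expanding-window validation folds (lower is better)."""
    scores = []
    for train, test in TimeSeriesSplit(n_splits=5).split(X):
        tr = train[-lookback:]  # restrict the training window to the lookback
        beta = model.fit(X[tr], y[tr]).coef_.ravel()[0]
        hedged = y[test] - beta * X[test].ravel()
        scores.append(abs(np.corrcoef(hedged, X[test].ravel())[0, 1]))
    return float(np.mean(scores))


X = bench.values.reshape(-1, 1)  # benchmark (basket) daily returns
y = target.values                # contract daily returns
betas = {}
for date in pd.date_range(target.index[0], target.index[-1], freq="QE"):  # "Q" on older pandas
    past = target.index <= date  # expanding sample up to the re-estimation date
    if past.sum() < 252:         # minimum of roughly 12 months of data
        continue
    # inner cross-validation: pick the model/lookback pair with the lowest score
    best_name, best_lb = min(
        ((m, lb) for m in models for lb in lookbacks),
        key=lambda p: cv_score(models[p[0]], p[1], X[past], y[past]),
    )
    # re-fit the winning specification and carry its beta into the next quarter (test sample)
    betas[date] = models[best_name].fit(X[past][-best_lb:], y[past][-best_lb:]).coef_.ravel()[0]
```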

A suitable criterion for choosing optimal models and hyperparameters in cross-validation is the mean absolute value of the correlation of hedged returns with the benchmark market return. Neither positive nor negative correlation is desirable. The model whose beta estimates, based on the training samples, produce the lowest mean absolute correlations in the validation samples is selected. The same criterion can be used to evaluate the success of the learning method based on the test samples. This final evaluation requires dividing the stacked test samples into correlation periods, such as months or quarters. If one checked the benchmark correlation of hedged returns over one long sample, one might overlook the periodically alternating positive and negative correlations of hedged returns with the basket and, hence, the “basis risk” involved in hedging.
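Under the same assumptions, the final evaluation on the stacked test samples could look as follows; `df` is a hypothetical DataFrame of daily hedged and benchmark returns with a DatetimeIndex.

```python
# Final evaluation on the stacked test samples: mean absolute correlation of
# hedged returns with the benchmark across monthly sub-periods. `df` has the
# hypothetical columns "hedged_return" and "benchmark_return".
monthly_abs_corr = (
    df.groupby(df.index.to_period("M"))
    .apply(lambda g: g["hedged_return"].corr(g["benchmark_return"]))
    .abs()
)
print("mean monthly absolute benchmark correlation:", monthly_abs_corr.mean())
```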

Applying and evaluating beta learning for EM FX forward positions

The data and the macro beta problem

We apply beta learning to the macro-quantamental version of an EM FX carry strategy. The strategy would trade FX forward positions in 20 emerging market currencies, namely:

  • BRL (Brazilian real), CLP (Chilean peso), COP (Colombian peso), IDR (Indonesian rupiah), ILS (Israeli shekel), INR (Indian rupee), KRW (Korean won), MXN (Mexican peso), MYR (Malaysian ringgit), PEN (Peruvian sol), PHP (Philippine peso), THB (Thai baht), TWD (Taiwanese dollar), ZAR (South African rand), all against the U.S. dollar;
  • CZK (Czech koruna), HUF (Hungarian forint), PLN (Polish zloty), RON (Romanian leu), all against the euro;
  • RUB (Russian rouble), TRY (Turkish lira), against an equally weighted basket of dollar and euro.

For all currency areas, we take real FX carry indicators from the J.P. Morgan Macrosynergy Quantamental System (JPMaQS). Real FX carry is the 1-month forward-implied carry minus the differential between the expected local and benchmark-currency inflation rates (view documentation here). The basis for the adjustment is the quantamental indicator “estimated 1-year ahead inflation expectation” (view documentation here). Being a quantamental system, JPMaQS calculates point-in-time versions of real carry and inflation expectations based on concurrent vintages of CPI data. Also, FX forward returns are taken from JPMaQS as 1-month FX forward returns, in % of the notional of the contract, assuming a roll back to full 1-month maturity at the end of the month (view documentation here).
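In terms of the indicators above, the real carry adjustment is a simple subtraction; the snippet below is a minimal sketch with hypothetical series names.

```python
# Real carry as defined above: nominal 1-month forward-implied carry minus the
# expected inflation differential. All three inputs are hypothetical pandas
# Series aligned on the same dates.
real_carry = nominal_carry - (local_inflation_expectation - benchmark_inflation_expectation)
```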

The hypothesis is that real forward-implied carry indicates differences in monetary policy stances and risk premia and is, hence, a valid predictor of future returns. However, most EM currencies are affected by shocks to global financial markets, as their economies depend on foreign credit and portfolio inflows. Currency traders often take positions in certain EM forwards as implicit bets on global equity and credit markets. Moreover, the risk premium implied in a real carry may partly depend on the very sensitivity of the currency to global portfolio flows, which adds systematic global market risk and is not indicative of local policy subsidies or idiosyncratic premia. Hence, beta learning and immunization are a plausible path to enhancing the alpha of a related strategy.

We use the global directional risk basket of JPMaQS as a benchmark for global directional financial market risk. It is based on sub-baskets for three asset classes: credit, equity, and foreign exchange. The constituent asset class baskets are sets of liquid contracts whose performance metrics are good proxies for key market segments formed by macro criteria. Rebalancing is monthly, based on either fixed or inverse volatility weights. Inverse volatility weights are based on exponential lookback windows of daily returns with a half-life of 11 days. View full documentation here.

The essence of the real carry-based FX strategy is to take FX forward positions across all EM currencies following their respective real carry at the end of the previous month. In the below analysis, the signal is subject to winsorization (capping) of absolute values of real carry at 15% to exclude market distortions. The estimated betas are used to transform outright positions, i.e., positions in FX forwards alone, into “hedged” positions that take offsetting longs or shorts in the basket. This implies an adjustment of both signals (real carry) and targets (returns): the signal is adjusted for the real carry of the risk basket times the currency’s beta, and the target is adjusted for the return on the hedge position.
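The adjustment can be written in one line per leg; the sketch below uses hypothetical series names and a beta taken from the learning process for the respective period.

```python
# Hedge adjustment of targets and signals for one currency, as described above.
# `fx_return`, `basket_return`, `real_carry`, `basket_carry`, and `beta` are
# hypothetical inputs for a single currency and estimation period.
hedged_return = fx_return - beta * basket_return  # target minus return on the hedge position
hedged_carry = real_carry - beta * basket_carry   # signal minus basket carry times beta
```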

Learning with single-frequency OLS and quantile regressions

We apply the statistical learning process to the 20-currency panel from 1999 to 2024 (May), subject to tradability and exchange rate flexibility as tracked by quantamental system dummies (view documentation here). The start date of the sample reflects the introduction of the euro and of reliable data on the tradability of many EM FX forward markets. The number of tradable cross-sections broadened in the 2000s, implying an unbalanced panel. The minimum data requirement for the learning process has been set at 12 months. Betas are re-estimated every quarter.

We consider two standard regression models for estimating betas: a least squares regression (OLS) and a least absolute deviation regression (quantile regression for the median, LAD). The hyperparameters are the frequency of the returns used for estimation (daily or weekly) and the lookback periods, which are either working days (21, 63, 126, 252, 504, or 756), i.e., approximately between one month and three years, or weeks (24, 48, 96, or 144). The criterion for optimizing models and hyperparameters is the negative of the mean absolute correlation of hedged returns with the benchmark, i.e., the global directional risk basket return.

We use an expanding k-fold panel splitter (the ExpandingKFoldPanelSplit class) as the inner splitter for the cross-validation of models and hyperparameters. At each stage in the sequential expansion of the sample, it divides the validation set into 5 folds of training sets and chronologically subsequent test sets, whereby the difference between the folds is the length of the training set. The below graph exemplifies this splitting method for the 2000-2024 sample.
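The chronological logic of such an expanding split can be illustrated with scikit-learn’s TimeSeriesSplit for a single series; the actual ExpandingKFoldPanelSplit class applies the same idea to multi-currency panels.

```python
# Print the expanding train/test folds of a 5-fold chronological split for an
# illustrative business-day calendar (single series, not a panel).
import numpy as np
import pandas as pd
from sklearn.model_selection import TimeSeriesSplit

dates = pd.bdate_range("2020-01-01", "2023-12-29")
for i, (train, test) in enumerate(TimeSeriesSplit(n_splits=5).split(np.arange(len(dates)))):
    print(
        f"fold {i}: train {dates[train[0]]:%Y-%m-%d} to {dates[train[-1]]:%Y-%m-%d}, "
        f"test {dates[test[0]]:%Y-%m-%d} to {dates[test[-1]]:%Y-%m-%d}"
    )
```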

A range of different model versions qualified as optimal over time. At the end of the sample of the learning process, and often before, the OLS regression model (LR) with a 2-year lookback prevailed. Short-lookback models rarely qualified. Both least squares and least absolute deviation estimators have been chosen. (Interestingly, if one includes blacklisted periods with untradable markets and return distortions, the preferred method becomes the least absolute deviation regressor (LAD), which is more robust to outliers.)

Average estimated betas have been positive for all EM currencies since 1999. The width of ranges often accords with the average magnitude of the estimated beta. Negative outliers have been rare and only recorded once, for the case of Israel. The distribution of beta estimates has been skewed towards the high side for almost all currencies.

Prior to hedging, the average monthly absolute correlation coefficient of FX forward position returns with the global directional risk basket has been 40% at a daily frequency of the underlying returns since 1999. After hedging, the average absolute correlation dropped to 15%, changing the character of the target returns.

Learning with dual-frequency optimal regression combinations

The second learning process always averages daily and weekly regression estimates. This approach reflects that these two frequencies have complementary advantages for beta estimation and, hence, should plausibly be considered together. The advantage of the daily frequency is a greater number of observations over time and the ability to use short lookbacks. The advantage of the weekly frequency is less distortion from “time-zone effects”: FX forwards trade liquidly at different hours and, hence, some have only a short overlap with the risk basket assets on any given day.

Again, we consider two standard regression models for estimating betas: a least squares regression and a least absolute deviation regression. In this setting, the frequency is not a hyperparameter, as both types estimate beta based on daily and weekly returns. For each frequency, the learning process selects lookback periods as hyperparameters from the same range of values as for the single-frequency estimates. Cross-validation rules and optimization criterion are the same.
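A minimal sketch of such a dual-frequency combination, assuming hypothetical daily return series `target` and `bench` and the end-of-sample lookbacks mentioned below, averages two univariate OLS estimates:

```python
# Dual-frequency beta as the average of a daily- and a weekly-frequency OLS
# estimate, each with its own lookback. `target` and `bench` are hypothetical
# daily return Series with a DatetimeIndex; weekly returns are approximated by
# summing daily returns.
import numpy as np

def ols_beta(y, x):
    """Univariate OLS beta of y on x (covariance over variance)."""
    return np.cov(y, x)[0, 1] / np.var(x, ddof=1)

beta_daily = ols_beta(target.tail(252), bench.tail(252))       # 1-year daily lookback
target_w, bench_w = (s.resample("W").sum() for s in (target, bench))
beta_weekly = ols_beta(target_w.tail(104), bench_w.tail(104))  # 2-year weekly lookback
beta = 0.5 * (beta_daily + beta_weekly)                        # equal-weighted combination
```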

The learning process historically preferred the outlier-resilient least absolute deviation regressor. Towards the end of the sample period, the preferred model became a least squares regressor with a 1-year lookback for the daily data and a 2-year lookback for the weekly data. The learning process also preferred longer lookbacks over shorter ones: the weekly-frequency lookback has never been less than 2 years and has mostly been 3 years.


Learning with separate correlation and volatility estimators

A univariate OLS regression beta can principally be decomposed into the product of the ratio of the volatility of the FX forward return to the volatility of the basket return and the Pearson correlation coefficient between the two returns. A case can be made that the volatility ratio is more subject to short-term market conditions, while correlation is more a matter of long-term structural features, such as dependence on capital inflows. Hence, the final learning process estimates the correlation and volatility components of the betas separately, allowing for different lookback periods for these two estimated parameters.

The hyperparameters of the volatility ratio estimates are the return frequency (daily or weekly), the time-weighting (simple rolling or exponential moving averages), and the lookback, which is between 5 days and 1 year for the daily frequency and between 4 weeks and 1 year for the weekly frequency. The hyperparameters for the correlation are the frequency (daily or weekly), the correlation type (Pearson or Spearman), and the lookback, which is between 1 and 5 years.
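A minimal sketch of the decomposition, with hypothetical daily return series and the end-of-sample lookbacks described below:

```python
# Correlation-volatility decomposition of a beta, with separate lookbacks for
# the two components. `target` and `bench` are hypothetical daily return
# Series with a DatetimeIndex.
corr = target.tail(504).corr(bench.tail(504))               # 2-year Pearson correlation
vol_ratio = target.tail(252).std() / bench.tail(252).std()  # 1-year volatility ratio
beta = corr * vol_ratio
# A Spearman correlation (method="spearman") or exponentially weighted
# volatilities would change only the component estimators, not the recomposition.
```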

The preferred model at the end of the sample period uses daily frequency returns with a 2-year lookback for a Pearson correlation coefficient and a 1-year rolling lookback window for volatility ratios. This method generally prefers daily over weekly-frequency returns data.


Comparing betas and hedged returns across learning methods

Estimated betas across learning processes are similar in long-term averages and cycles. However, there are also marked episodic differences. Also, the single-frequency betas seem most prone to volatility and occasional extreme values.

Hedging does not generally remove all large drawdowns and surges in FX forward returns. However, it does have a large impact on longer-term returns. High-beta currency returns have been much lower or even negative in the long term, echoing the observation that it has been hard to extract long-term value from EM currencies alone without the implicit global market-related risk premium.

EMFX enhanced carry strategy with global risk hedge

Hedging materially alters real carry signals. It not only reduces the position-specific real carry of longs in high-beta EM currencies but also increases the volatility of carry estimates, because those now depend on changing correlations and return volatility ratios.

To evaluate the predictive power of real carry, we first test the panel correlation of signals and subsequent monthly FX forward returns. Relations are visualized below for the unhedged and hedged strategies based on the three types of beta learning. Based on the Macrosynergy panel tests, all real carry signals have delivered significant predictive power. However, forward correlation has been higher for the hedged real carry signals and targets compared to the unhedged ones.

The monthly accuracy measures of real carry signals with respect to subsequent returns look similar across carry signals, all between 54% and 55%. However, there are important differences. The simple real carry signal has a strong long bias, with a positive directional signal in over 75% of all months and markets. Since unhedged returns also had a 55% positive bias, the long bias leads to a high precision of positive predictions, i.e., a 57% ratio of true positives relative to all positive predictions. However, unhedged real carry has not successfully predicted negative returns, with a ratio of below 50% of true negatives to all negative predictions.

By contrast, all hedged strategies displayed above-50% positive and negative precision. Moreover, balanced accuracy, the average of the correct positive and negative return prediction rates, has been higher for the hedged strategies than for the unhedged one.
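These statistics can be computed with standard scikit-learn metrics; the sketch below assumes aligned, stacked monthly series of signals and subsequent returns with hypothetical names.

```python
# Precision and balanced accuracy of directional signals versus subsequent
# returns. `signal` and `next_month_return` are hypothetical aligned Series.
import numpy as np
from sklearn.metrics import balanced_accuracy_score, precision_score

sig, ret = np.sign(signal.values), np.sign(next_month_return.values)
mask = (sig != 0) & (ret != 0)  # drop exact zeros to keep the problem binary
sig, ret = sig[mask], ret[mask]
pos_precision = precision_score(ret, sig, pos_label=1)   # true positives / all positive predictions
neg_precision = precision_score(ret, sig, pos_label=-1)  # true negatives / all negative predictions
bal_accuracy = balanced_accuracy_score(ret, sig)         # average of positive and negative hit rates
```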

Finally, we compare stylized PnLs for unhedged and hedged real FX carry signals. Naïve PnLs are based on monthly position rebalancing in accordance with signals for all 20 EMFX forwards (or as many as are tradable), normalized and winsorized at 3 standard deviations at the end of each month. The end-of-month score is the basis for the positions of the next month under the assumption of a 1-day slippage for trading. The naïve PnL does not consider transaction costs, risk management, or compounding. For the chart below, the PnL has been scaled to an annualized volatility of 10%.
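A minimal sketch of such a naïve PnL for a single cross-section, under simplifying assumptions and with hypothetical series names, is shown below.

```python
# Naïve PnL construction for one cross-section. `carry` (real carry signal)
# and `fx_return` are hypothetical daily Series with a DatetimeIndex; the
# expanding z-score is an illustrative simplification of the normalization.
import numpy as np

score = ((carry - carry.expanding().mean()) / carry.expanding().std()).clip(-3, 3)
month_end_score = score.resample("ME").last()  # "M" on older pandas
positions = month_end_score.reindex(fx_return.index, method="ffill").shift(1)  # 1-day slippage
pnl = (positions * fx_return).dropna()
pnl_10 = pnl * (0.10 / (pnl.std() * np.sqrt(252)))  # scale to 10% annualized volatility
```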

The naïve PnL of a simple real FX carry strategy would have produced a 25-year Sharpe ratio of just under 0.7 and a Sortino ratio of under 1, outperforming a long-only portfolio that recorded ratios of 0.4 and 0.55, respectively. The strategy would have been quite seasonal, as most returns were accumulated in the 2000s, and the 5% best months accounted for about 72% of the total long-term PnL.

Unsurprisingly, the unhedged real carry strategy displayed strong daily correlation with global financial market benchmarks, specifically 30% correlation with both the S&P500 return and the EUR-USD forward return.

The naïve PnL of the strategy with the single-frequency hedge method would have posted stronger risk-adjusted returns with less seasonality and much lower benchmark correlation. The long-term Sharpe ratio of this strategy has been near 1 and the Sortino ratio near 1.4. The strategy was profitable across the 2000s, 2010s, and 2020s (so far), and the 5% best months account for less than 50% of the overall PnL. Meanwhile, the correlation of strategy returns with the S&P500 was just 8%, while the correlation with EUR-USD returns was near zero.

The naïve PnLs of the dual-frequency hedged carry strategy and the correlation-volatility separation-based hedged strategy post similar performance ratios. Both methods achieved marginally higher Sharpe and Sortino ratios than the single-frequency method, at around 1 and 1.5, respectively. Also, seasonality and benchmark correlations are similarly low.

Overall, hedging with all learning processes has produced far better risk-adjusted return metrics than the standard unhedged strategy, materially less seasonality, and near zero correlations with the financial market benchmark. Importantly, the exact methods used for beta learning have not had a decisive bearing on strategy performance: any reasonable learning approach would have produced the bulk of the improvement.
