Nowcasting for financial markets

Nowcasting is a modern approach to monitoring economic conditions in real time. It makes financial market trading more efficient because economic dynamics drive corporate profits, financial flows and policy decisions, and account for a large part of asset price fluctuations. The main technology behind nowcasting is the dynamic factor model, which condenses the information in numerous correlated ‘hard’ and ‘soft’ data series into a small number of ‘latent’ factors. A growth nowcast can be interpreted as the factor that is most correlated with a diverse, representative set of growth-related data series. The state-space representation of the dynamic factor model formalizes how markets read economic data in real time. The related estimation technique (the ‘Kalman filter’) generates projections for all data series and, for each data release, estimates a model-based surprise called ‘news’. In recent years machine learning models, such as support vector machines, LASSO, elastic net and feed-forward artificial neural networks, have been deployed to improve the predictive power of nowcasts.
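
For illustration, the sketch below is a minimal one-factor dynamic factor model nowcast in Python using statsmodels’ DynamicFactor class. The three monthly series, their loadings and all parameter choices are simulated placeholders, and the ‘news’ decomposition of data releases is not shown (statsmodels’ mixed-frequency variant, DynamicFactorMQ, offers one).

```python
# Minimal sketch of a growth nowcast from a one-factor dynamic factor model.
# The panel below is simulated; a real nowcast would use a mixed-frequency
# panel of 'hard' and 'soft' indicators.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
factor = np.zeros(n)
for t in range(1, n):                       # latent 'growth' factor (AR(1))
    factor[t] = 0.8 * factor[t - 1] + rng.normal(scale=0.5)

# Three correlated monthly series loading on the latent factor
panel = pd.DataFrame({
    "industrial_production": 1.0 * factor + rng.normal(scale=0.7, size=n),
    "retail_sales":          0.8 * factor + rng.normal(scale=0.9, size=n),
    "business_survey":       0.6 * factor + rng.normal(scale=0.8, size=n),
}, index=pd.period_range("2005-01", periods=n, freq="M"))

# Standardize, then estimate the dynamic factor model by maximum likelihood.
z = (panel - panel.mean()) / panel.std()
model = sm.tsa.DynamicFactor(z, k_factors=1, factor_order=1)
result = model.fit(disp=False)

# The smoothed common factor serves as the model-based growth nowcast; the
# Kalman filter/smoother also yields projections for every series in the panel.
nowcast = pd.Series(result.factors.smoothed[0], index=z.index)
print(nowcast.tail())
```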

(more…)

How banks’ dollar holdings drive exchange rate dynamics

Non-U.S. financial institutions hold precautionary positions in U.S. dollar assets as protection against financial shocks. This gives rise to a safety premium on the dollar. The premium varies over time and, hence, not only accounts for contemporaneous exchange rate dynamics but also helps to predict exchange rate trends. An IMF paper measures non-U.S. banks’ dollar demand for 26 economies as the ratio of dollar-denominated assets to total assets by nationality. Demand for U.S. dollars tends to surge after negative financial market shocks and causes dollar strength. Non-U.S. holdings of dollar assets have also been a highly significant predictor of dollar trends in subsequent years: large holdings, which signal an elevated safety premium, have heralded subsequent dollar depreciation in the past.
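
The sketch below is a hypothetical illustration of the dollar-demand measure and the predictive regression it feeds into. All series are invented placeholders, not the IMF paper’s data, and no result is implied.

```python
# Illustrative only: dollar share of non-U.S. banks' assets (by nationality)
# and a predictive regression of subsequent dollar returns on that share.
import numpy as np
import pandas as pd
import statsmodels.api as sm

years = pd.period_range("2000", periods=20, freq="Y")
rng = np.random.default_rng(1)

usd_assets = pd.Series(rng.uniform(2.0, 4.0, len(years)), index=years)     # USD-denominated assets
total_assets = pd.Series(rng.uniform(8.0, 12.0, len(years)), index=years)  # all assets by nationality
dollar_share = usd_assets / total_assets                                   # the paper's demand measure

# Placeholder for the dollar's return over the subsequent year
usd_return_next = pd.Series(rng.normal(0, 0.05, len(years)), index=years)

# Predictive regression of subsequent dollar returns on the current share;
# in the paper the estimated slope is negative: large holdings herald depreciation.
X = sm.add_constant(dollar_share.rename("dollar_share"))
reg = sm.OLS(usd_return_next, X).fit()
print(reg.params)
```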

(more…)

External imbalances and FX returns

Hedge ratios of international investment positions have increased over past decades, spurred by regulation and expanding derivative markets. This has given rise to predictable movements in spot and forward exchange rates. First, on balance, hedgers are long currencies with positive net international investment positions and short those with negative positions. Since intermediaries require some profit for balance sheet usage, these trades command negative premia and widen cross-currency bases. Second, hedge ratios increase in times of rising FX volatility. An increase in the hedge ratio for a currency puts downward pressure on its market price in proportion to its external imbalance and points to higher medium-term returns. Also, the dispersion of cross-currency bases increases in times of turmoil.

(more…)

Predicting volatility with heterogeneous autoregressive models

Heterogeneous autoregressive models of realized volatility have become a popular standard in financial market research. They use high-frequency volatility measures and the assumption that traders with different time horizons perceive, react to, and cause different types of volatility components. A key hypothesis is that volatility over longer time intervals has a stronger impact on short-term volatility than vice versa. This leads to an additive volatility cascade and a simple model in autoregressive form that can be estimated with ordinary least squares regression. Natural extensions include weighted least-squares estimation, the inclusion of jump components and the consideration of index covariances. Research papers report significant improvements in volatility forecasting performance compared with other models, across equity, fixed income, and commodity markets.
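
A minimal sketch of the baseline HAR regression under common conventions (5-day and 22-day averages for the weekly and monthly components); the realized variance series below is a simulated placeholder.

```python
# HAR-RV sketch: next-day realized variance regressed on daily, weekly and
# monthly realized-variance components, estimated by ordinary least squares.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
rv = pd.Series(np.exp(rng.normal(-9, 0.5, 1000)))   # placeholder realized variance series

har = pd.DataFrame({
    "rv_d": rv,                                      # daily component
    "rv_w": rv.rolling(5).mean(),                    # weekly component (5-day average)
    "rv_m": rv.rolling(22).mean(),                   # monthly component (22-day average)
})
har["target"] = rv.shift(-1)                         # next-day realized variance
har = har.dropna()

X = sm.add_constant(har[["rv_d", "rv_w", "rv_m"]])
res = sm.OLS(har["target"], X).fit()
print(res.params)

# One-step-ahead forecast from the latest observed predictors
print(res.predict(X.iloc[[-1]]))
```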

(more…)

Joint predictability of FX and bond returns

When macroeconomic conditions change, rational inattention and cognitive frictions plausibly prevent markets from adjusting expectations for future interest rates immediately and fully. This is an instance of information inefficiency. The resulting forecast errors give rise to joint predictability of currency and bond market returns. In particular, an upside shock to the rates outlook in a country heralds positive (rationally) expected returns on its currency and negative expected returns on its long-term bond. This proposition has been backed by empirical evidence for developed markets over the past 30 years.
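
As a toy illustration of the directional implication, the sketch below maps hypothetical revisions in one-year-ahead policy rate expectations into currency and bond positions; all currency names and numbers are invented.

```python
# Encode the stated proposition: a positive revision to a country's rate
# outlook maps to a long position in its currency and a short position in
# its long-term bond. Purely illustrative data.
import pandas as pd

# Expected policy rate one year ahead, now versus three months ago (in %)
outlook = pd.DataFrame({
    "expected_rate_now":  [3.50, 0.25, 4.00],
    "expected_rate_prev": [3.00, 0.25, 4.25],
}, index=["USD", "JPY", "GBP"])

revision = outlook["expected_rate_now"] - outlook["expected_rate_prev"]

signals = pd.DataFrame({
    "fx_position":   revision.apply(lambda x: "long" if x > 0 else ("short" if x < 0 else "flat")),
    "bond_position": revision.apply(lambda x: "short" if x > 0 else ("long" if x < 0 else "flat")),
})
print(signals)
```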

(more…)

The predictive power score

The predictive power score is a summary metric for predictive relations between data series. Like correlation, it is suitable for quick data exploration. Unlike correlation, it can capture non-linear relations, categorical data, and asymmetric relations, where variable A informs on variable B more than variable B informs on variable A. Technically, the score measures the success of a decision tree model in predicting a target variable with the help of a predictor variable, out of sample and relative to naïve approaches. For macro strategy development, predictive power score matrices can easily be created with an existing Python module and can increase the efficiency of finding hidden patterns in the data and selecting predictor variables.
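
The sketch below illustrates the idea behind the score with a simplified, from-scratch stand-in built on scikit-learn (a decision tree, out-of-sample predictions, and a naïve median benchmark) rather than the existing package mentioned above; the tree depth and fold count are arbitrary.

```python
# Simplified predictive power score for numeric data: out-of-sample skill of a
# decision tree in predicting a target from one feature, scaled against a
# naive median prediction.
import numpy as np
import pandas as pd
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error

def pps_numeric(df, feature, target, folds=4):
    """Crude predictive power score for a numeric feature/target pair."""
    x = df[[feature]].to_numpy()
    y = df[target].to_numpy()
    pred = cross_val_predict(DecisionTreeRegressor(max_depth=4), x, y, cv=folds)
    mae_model = mean_absolute_error(y, pred)
    mae_naive = mean_absolute_error(y, np.full_like(y, np.median(y)))
    # 1 = perfect prediction, 0 = no better than always predicting the median
    return max(0.0, 1.0 - mae_model / mae_naive)

# Non-linear, asymmetric example: y depends on x, but x is hard to recover from y
rng = np.random.default_rng(7)
x = rng.uniform(-3, 3, 500)
y = x ** 2 + rng.normal(0, 0.5, 500)
data = pd.DataFrame({"x": x, "y": y})

print("PPS x -> y:", round(pps_numeric(data, "x", "y"), 2))   # high
print("PPS y -> x:", round(pps_numeric(data, "y", "x"), 2))   # near zero
```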

(more…)

Equilibrium theory of Treasury yields

An equilibrium model for U.S. Treasury yields explains how macroeconomic trends and related expectations for future short-term interest rates shape the yield curve. Long-term yield trends arise from learning about stable components in GDP growth and inflation. They explain the steady rise of Treasury yields in the 1960s-1980s and their decline in the 1990s-2010s. Cyclical movements in yield curves result from learning about transitory deviations of GDP growth and inflation. They explain why curves have been steep coming out of recessions and inverted in mature economic expansions. Finally, since the 2000s, pro-cyclical inflation expectations and fears of secular stagnation have accentuated the steepness of the Treasury curve: positive correlation between inflation and growth expectations means that the Fed can cut rates more aggressively to support the economy.

(more…)

Factor timing

Factors beyond aggregate market risk are sources of alternative risk premia. Factor timing addresses the question of when to receive and when to pay such risk premia. A new method for predicting the performance of cross-sectional equity return factors proposes to focus only on the dominant principal components of a wide array of factors. This dimension reduction seems to be critical for robust estimation. Forecasts of the dominant principal components can serve as the basis of portfolio construction. Empirical evidence suggests that predictability is significant and that market-neutral factor timing is highly valuable for portfolio construction, over and above directional market timing. Factor timing is related to macroeconomic conditions, particularly at business cycle frequency.
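
A stylized sketch of the dimension-reduction step: principal components are extracted from a panel of simulated factor returns, each dominant component is forecast with a placeholder predictor (its own lag, not the valuation-type predictors used in the research), and the component forecasts are mapped back into expected factor returns.

```python
# Dimension reduction for factor timing: forecast a few dominant principal
# components of a factor-return panel instead of every factor separately.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n_months, n_factors, n_pcs = 240, 20, 3
factor_returns = pd.DataFrame(rng.normal(0, 0.02, (n_months, n_factors)),
                              columns=[f"factor_{i}" for i in range(n_factors)])

# Dominant principal components of the factor return panel
pca = PCA(n_components=n_pcs)
pcs = pd.DataFrame(pca.fit_transform(factor_returns),
                   columns=[f"pc_{i}" for i in range(n_pcs)])

# Forecast each dominant component from its own lag (placeholder predictor)
forecasts = {}
for col in pcs.columns:
    lagged = pcs[col].shift(1).dropna()
    model = LinearRegression().fit(lagged.to_frame(), pcs[col].iloc[1:])
    forecasts[col] = model.predict(pcs[[col]].iloc[[-1]])[0]

# Map the component forecasts back into expected factor returns
pc_forecast = np.array([[forecasts[c] for c in pcs.columns]])
expected_factor_returns = pd.Series(pca.inverse_transform(pc_forecast)[0],
                                    index=factor_returns.columns)
print(expected_factor_returns.sort_values(ascending=False).head())
```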

(more…)

Macro trading and macroeconomic trend indicators

Macroeconomic trends are powerful asset return factors because they affect risk aversion and risk-neutral valuations of securities at the same time. The influence of macroeconomics appears to be strongest over longer horizons. A macro trend indicator can be defined as an updatable time series that represents a meaningful economic trend and that can be mapped to the performance of tradable assets or derivatives positions. It can be based on three complementary types of information: economic data, financial market data, and expert judgment. Economic data establish a direct link between investment and economic reality, market data inform on the state of financial markets and economic trends that are not (yet) incorporated in economic data, and expert judgment is critical for formulating stable theories and choosing the right data sets.

(more…)

A statistical learning workflow for macro trading strategies

Statistical learning for macro trading involves model training, model validation and learning method testing. A simple workflow [1] determines the form and parameters of candidate trading models, [2] chooses the best of these models based on past out-of-sample performance, and [3] assesses the value of the deployed learning method based on further out-of-sample results. A convenient technology is the ‘list-column workflow’ based on the tidyverse packages in R. It stores all related objects in a single data table, including models and nested data sets, and implements statistical learning through functional programming on that table. Key steps are [1] the creation of point-in-time data sets that represent information available at a particular date in the past, [2] the estimation of different model types based on initial training sets prior to each point in time, [3] the evaluation of these different model types based on subsequent validation data just before each point in time, and [4] the testing of the overall learning method based on testing data at each point in time.
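
The post’s workflow uses R; the sketch below is a rough Python analogue of the four point-in-time steps, with simulated data, arbitrary model candidates and arbitrary window sizes.

```python
# Point-in-time train / validate / test loop: a simplified Python analogue of
# the list-column workflow described above. Everything here is illustrative.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(11)
n = 300
features = pd.DataFrame(rng.normal(size=(n, 3)), columns=["x1", "x2", "x3"])
target = 0.5 * features["x1"] - 0.3 * features["x2"] + 0.1 * features["x3"] + rng.normal(0, 1, n)

candidates = {"ols": LinearRegression(), "ridge": Ridge(alpha=1.0)}
test_errors = []

for t in range(200, n):                          # [1] point-in-time information sets
    train_X, train_y = features.iloc[: t - 50], target.iloc[: t - 50]
    val_X, val_y = features.iloc[t - 50 : t], target.iloc[t - 50 : t]

    # [2] train each candidate model on the initial training set
    fitted = {name: m.fit(train_X, train_y) for name, m in candidates.items()}

    # [3] pick the model with the best out-of-sample validation performance
    best = min(fitted, key=lambda name: mean_squared_error(val_y, fitted[name].predict(val_X)))

    # [4] record the chosen model's error on the next, unseen observation to
    #     judge the learning method itself out of sample
    pred = fitted[best].predict(features.iloc[[t]])[0]
    test_errors.append((target.iloc[t] - pred) ** 2)

print("mean out-of-sample squared error:", np.mean(test_errors))
```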

(more…)