Nowcasting for financial markets

Nowcasting is a modern approach to monitoring economic conditions in real time. It makes financial market trading more efficient because economic dynamics drive corporate profits, financial flows and policy decisions, and account for a large part of asset price fluctuations. The main technology behind nowcasting is the dynamic factor model, which condenses the information of numerous correlated ‘hard’ and ‘soft’ data series into a small number of ‘latent’ factors. A growth nowcast can be interpreted as the factor that is most correlated with a diverse and representative set of growth-related data series. The state-space representation of the dynamic factor model formalizes how markets read economic data in real time. The related estimation technique (the ‘Kalman filter’) generates projections for all data series and quantifies for each data release a model-based surprise, called ‘news’. In recent years, machine learning models, such as support vector machines, LASSO, elastic net and feed-forward artificial neural networks, have been deployed to improve the predictive power of nowcasts.
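
The filtering logic can be sketched in a few lines. Below is a minimal one-factor Kalman filter in Python; the loadings, persistence and variance parameters are illustrative assumptions, not estimates, and a production nowcasting model would handle mixed frequencies and ragged edges. It produces one-step-ahead projections for all series and the model-based ‘news’ in each release:

```python
import numpy as np

def kalman_nowcast(y, lam, phi=0.9, q=1.0, r=None):
    """Minimal one-factor Kalman filter (illustrative sketch).
    y   : (T, n) matrix of observed data series
    lam : (n,) factor loadings
    phi : factor persistence; q: factor innovation variance;
    r   : (n,) observation noise variances (defaults to ones).
    Returns the filtered factor path and the per-period 'news'
    (each data release minus the model's one-step-ahead projection)."""
    T, n = y.shape
    r = np.ones(n) if r is None else r
    f, p = 0.0, 1.0                                   # factor mean and variance
    factors = np.empty(T)
    news = np.empty((T, n))
    for t in range(T):
        f_pred, p_pred = phi * f, phi * p * phi + q   # predict the factor
        nu = y[t] - lam * f_pred                      # 'news': data minus projection
        news[t] = nu
        S = np.outer(lam, lam) * p_pred + np.diag(r)  # innovation covariance
        K = p_pred * lam @ np.linalg.inv(S)           # Kalman gain
        f = f_pred + K @ nu                           # update factor with the news
        p = p_pred * (1.0 - K @ lam)
        factors[t] = f
    return factors, news
```

The factor estimate is revised by each surprise in proportion to the Kalman gain, which is exactly the sense in which the filter formalizes how markets read data releases.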


Joint predictability of FX and bond returns

When macroeconomic conditions change, rational inattention and cognitive frictions plausibly prevent markets from adjusting expectations for future interest rates immediately and fully. This is an instance of information inefficiency. The resulting forecast errors give rise to joint predictability of currency and bond market returns. In particular, an upside shock to the rates outlook in a country heralds positive (rationally) expected returns on its currency and negative expected returns on its long-term bond. This proposition has been backed by empirical evidence for developed markets over the past 30 years.


Lagged correlation between asset prices

Efficient market theory assumes that all market prices incorporate all information at the same time. Realistically, different market segments focus on different news flows, depending on the nature of the traded securities and the research capacity of investors. Such specialization makes it plausible that lagged correlations arise between securities prices, even though their specifics may change over time. Indeed, there is empirical evidence of lagged correlation between the price trends of different U.S. stocks. Such lagged correlation can be identified and tested through a neural network. Academic research finds that price trends of some stocks have been predictable out-of-sample based on information about the price trends of others.
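
The idea can be sketched with a tiny one-hidden-layer feed-forward network written from scratch in NumPy. The architecture, training settings and in-sample fit below are simplifying assumptions for illustration, not the setup of the cited research, which works out-of-sample and at much larger scale:

```python
import numpy as np

def fit_lagged_nn(x, y, lag=1, hidden=8, lr=0.05, epochs=1000, seed=0):
    """Fit a small tanh network to predict stock y's return from stock x's
    return `lag` periods earlier, by full-batch gradient descent on squared
    error. Returns the in-sample R-squared of the lagged relationship."""
    rng = np.random.default_rng(seed)
    X = x[:-lag].reshape(-1, 1)          # lagged predictor (leader stock)
    Y = y[lag:].reshape(-1, 1)           # target (follower stock)
    W1 = rng.normal(scale=0.5, size=(1, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    b2 = 0.0
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)         # forward pass
        pred = H @ W2 + b2
        err = pred - Y
        gW2 = H.T @ err / len(X)         # backward pass (chain rule)
        gb2 = err.mean()
        gH = err @ W2.T * (1 - H ** 2)
        gW1 = X.T @ gH / len(X)
        gb1 = gH.mean(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return 1 - ((pred - Y) ** 2).mean() / Y.var()
```

A high R-squared for the lagged pair, against a near-zero R-squared for an unrelated series, is the signature of the lead-lag relation the research exploits.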


Tracking investor expectations with ETF data

Retail investors’ return expectations affect market momentum and risk premia. The rise of ETFs with varying and inverse leverage offers an opportunity to estimate the distribution of such expectations based on actual transactions. A new paper shows how to do this through ETFs that track the S&P 500. The resulting estimates are correlated with investor sentiment surveys but more informative. An important empirical finding is that expectations extrapolate past price action. After a negative return shock, investor beliefs become more pessimistic on average, more dispersed, and more negatively skewed.


The q-factor model for equity returns

Investment-based capital asset pricing looks at equity returns from the angle of issuers, rather than investors. It is based on the cost of capital and the net present value rule of corporate finance. The q-factor model is an implementation of investment capital asset pricing that explains many empirical features of relative equity returns. In particular, the model proposes that the following factors support outperformance of stocks: low investment, high profitability, high expected growth, low valuation ratios, low long-term prior returns, and positive momentum. According to its proponents, the investment CAPM and q-factor model complement the classical consumption-based CAPM and explain why many so-called ‘anomalies’ are actually consistent with efficient markets.


The predictive superiority of ensemble methods for CDS spreads

Through ‘R’ and ‘Python’ one can apply a wide range of methods for predicting financial market variables. Key concepts include penalized regression, such as Ridge and LASSO, support vector regression, neural networks, standard regression trees, bagging, random forest, and gradient boosting. The latter three are ensemble methods, i.e. machine learning techniques that combine several base models in order to produce one optimal prediction. According to a new paper, these ensemble methods scored a decisive win in the nowcasting and out-of-sample prediction of credit spreads. One apparent reason is the importance of non-linear relations in times of high volatility.
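
To make the ensemble idea concrete, here is a from-scratch sketch of gradient boosting with single-split regression trees (‘stumps’) as base models; the data, parameters and one-feature setup are illustrative assumptions, not the paper’s specification. Each round fits a stump to the current residuals and adds a shrunken version of it to the ensemble, which is how boosting captures non-linear relations that a single linear model misses:

```python
import numpy as np

def fit_stump(x, residual):
    """Best single-split regression stump on one feature (squared error)."""
    best = (np.inf, None, 0.0, 0.0)
    for s in np.unique(x)[:-1]:                       # candidate split points
        left, right = residual[x <= s], residual[x > s]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[0]:
            best = (sse, s, left.mean(), right.mean())
    return best[1], best[2], best[3]                  # split, left value, right value

def gradient_boost(x, y, n_rounds=100, lr=0.1):
    """Gradient boosting for squared error: each round fits a stump to the
    residuals of the current ensemble and adds it with shrinkage lr."""
    pred = np.full_like(y, y.mean(), dtype=float)     # start from the mean
    stumps = []
    for _ in range(n_rounds):
        s, lv, rv = fit_stump(x, y - pred)            # fit to current residuals
        pred += lr * np.where(x <= s, lv, rv)         # add shrunken base model
        stumps.append((s, lv, rv))
    return stumps, pred
```

Even this toy version outperforms a linear fit on a non-linear target, which hints at why ensembles dominated the credit spread exercise in volatile periods.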


Basic factor investment for bonds

Popular factors for government bond investment are “carry”, “momentum”, “value” and “defensive”. “Carry” depends on the steepness of the yield curve, which to some extent reflects aversion to risk and volatility. “Momentum” relates to medium-term directional trends, which in the case of fixed income are often propagated by fundamental economic changes. “Value” compares yields against a fundamental anchor, though some approaches are as rough as medium-term mean reversion. Finally, “defensive” seeks to benefit from some bonds’ status as a “safe haven” in crisis times. A historical analysis over the past 50 years suggests that all of these factors have been relevant in some form. Yet, without a more precise and compelling macroeconomic rationale, factor investing may lack stability of performance in the medium term. The scope for theory-guided improvement seems vast.
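
The first three factor definitions can be made concrete with a stylized sketch for a single bond market; the specific signal formulas and window lengths below are our own illustrative assumptions, not those of the cited analysis:

```python
import numpy as np

def rolling_mean(x, w):
    """Trailing moving average; the first w-1 entries are NaN."""
    out = np.full(len(x), np.nan)
    c = np.cumsum(np.insert(x, 0, 0.0))
    out[w - 1:] = (c[w:] - c[:-w]) / w
    return out

def bond_factor_signals(y2, y10, ret, mom_w=12, val_w=60):
    """Stylized monthly factor signals for one government bond market.
    carry    : curve steepness, 10-year minus 2-year yield
    momentum : trailing mom_w-month cumulative return
    value    : 10-year yield minus its val_w-month average
               (a rough mean-reversion anchor, as noted in the text)"""
    carry = y10 - y2
    momentum = rolling_mean(ret, mom_w) * mom_w
    value = y10 - rolling_mean(y10, val_w)
    return carry, momentum, value
```

A “defensive” signal would additionally require crisis indicators or equity correlations, which is precisely where a more compelling macroeconomic rationale would add value.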


A method for de-trending asset prices

Financial market prices and return indices are non-stationary time series, even in logarithmic form. This means not only that they are drifting, but also that their distribution changes over time. The main purpose of de-trending is to mitigate the effects of non-stationarity on estimated price or return distributions. De-trending can also support the design of trading strategies. The simplest basis for estimating trends is to subtract moving averages. The key challenge is to pick the appropriate average window, which must be long enough to detect a trend and short enough to make the de-trended data stationary. A neat method is to pick the window based on the kurtosis criterion, i.e. choosing the window length that brings the ‘fatness of tails’ of de-trended data to what it should look like under a normal distribution.
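
A minimal sketch of the kurtosis criterion, assuming simple trailing moving averages and a pre-specified grid of candidate windows (both assumptions are ours, for illustration):

```python
import numpy as np

def excess_kurtosis(x):
    """Sample excess kurtosis: approximately zero under a normal distribution."""
    z = x - x.mean()
    return (z ** 4).mean() / (z ** 2).mean() ** 2 - 3.0

def pick_detrending_window(prices, windows):
    """Choose the moving-average window whose de-trended residuals
    (price minus trailing moving average) have excess kurtosis closest
    to zero, i.e. tail fatness closest to Gaussian."""
    best_w, best_k = None, np.inf
    for w in windows:
        ma = np.convolve(prices, np.ones(w) / w, mode="valid")
        resid = prices[w - 1:] - ma          # de-trended series
        k = abs(excess_kurtosis(resid))
        if k < best_k:
            best_w, best_k = w, k
    return best_w
```

Too short a window leaves residuals dominated by noise, too long a window leaves trend in the residuals; the kurtosis criterion picks the compromise by matching the tails to the normal benchmark.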


Tradable economics

Tradable economics is a technology for building systematic trading strategies based on economic data. Economic data are statistics that – unlike market prices – directly inform on economic activity. Tradable economics is not a zero-sum game. Trading profits are ultimately paid out of the economic gains from a faster and smoother alignment of market prices with economic conditions. Hence, technological advances in the field increase the value generation or “alpha” of the asset management industry overall. This suggests that the technology is highly scalable. One critical step is to make economic data applicable to systematic trading or trading support tools, which requires considerable investment in data wrangling, transformation, econometric estimation, documentation, and economic research.


Reinforcement learning and its potential for trading systems

In general, machine learning is a form of artificial intelligence that allows computers to improve the performance of a task through data, without being directly programmed. Reinforcement learning is a specialized application of (deep) machine learning that interacts with its environment and seeks to improve the way it performs a task so as to maximize its reward. The computer employs trial and error. The model designer defines the reward but gives no clues as to how to solve the problem. Reinforcement learning holds potential for trading systems because markets are highly complex and quickly changing dynamic systems. Conventional forecasting models have been notoriously inadequate. A self-adaptive approach that can learn quickly from the outcome of actions may be more suitable. A recent paper proposes a reinforcement learning algorithm for that purpose.
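
The trial-and-error logic can be shown with a toy example: tabular Q-learning on a hypothetical two-regime market. The environment, rewards and parameters below are invented for illustration and are far simpler than the deep reinforcement learning of the cited paper:

```python
import numpy as np

def train_toy_trader(steps=5000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a toy two-regime market (illustrative only).
    States: 0 = uptrend, 1 = downtrend. Actions: 0 = long, 1 = short.
    The designer defines only the reward: +1 for trading with the regime,
    -1 against it; the regime flips with 10% probability each step.
    The algorithm must discover the right policy by trial and error."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((2, 2))                              # Q[state, action] values
    s = 0
    for _ in range(steps):
        # epsilon-greedy: mostly exploit current knowledge, sometimes explore
        a = int(rng.integers(2)) if rng.random() < eps else int(Q[s].argmax())
        r = 1.0 if a == s else -1.0                   # reward from the environment
        s_next = 1 - s if rng.random() < 0.1 else s   # regime may flip
        # Q-learning update: move toward reward plus discounted future value
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
    return Q
```

After training, the greedy policy goes long in uptrends and short in downtrends, even though no rule to that effect was ever programmed; only the reward was specified.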
